funsizedshark · 6 months
oda shouldn't have made it a point of zoro never calling sanji by his name once in the entire manga if he didn't want me to picture a scenario where zoro singlehandedly brings sanji back from the germa influence simply by calling out his name for the first time
graphlink · 4 years
Marketing-Stupid #1
SOMETIMES IT JUST FEELS LIKE we’re talking to asylum escapees. Because some people simply do not want to benefit themselves, their employees, or their family. Here’s just one story. 
This is one of my favorite stories because it underscores how easy it is to do some really irreparable harm to your own business, for years, without even being aware of it.
GYM OWNER
(NOTE: It will help you to know that people visit websites from desktops, tablets, and smartphones. Most consumer-oriented businesses get the majority of their visits from smartphones. A gym's clientele might be over 95% smartphone visitors.)
Before our meeting, I did my due diligence and explored the gym owner’s website. I looked at it on my work computer first. It wasn’t great, and it wasn’t bringing him any clients. It was over-run with text, and hard to read, even on a large desktop screen.
Then I picked up my phone to look at the mobile version…
I met the gym owner at his business and got the cook's tour. A nice operation, but he told me he wasn't getting enough clientele. He had tried and tried, and nothing had worked. He was just skating by. In fact, he told me that if the laundry raised its rates for cleaning the towels, he might no longer skate by.
I asked him to tell me all the things he had tried that had not worked. He told me, among other things, that he had been running Google ads for a very long time and had gotten nothing out of it. I asked how long. After pondering for a moment, he replied that he must have done at least 2-3 years of ads, and spent thousands. And never got a single client from it.
Now I’m concerned, because I think I see where this is heading. 
I asked him, “Tell me, how long have you had that website up?” He thought about it and told me it must have been up almost 5 years. This would have put its launch in the early 2010s, a time when most websites were already including mobile versions, but not all.
I asked if he had taken a look at his website lately. He hemmed and hawed, and then he said “Yeah, sure, why?” I asked if he had looked at his MOBILE website recently. He hemmed and hawed more, and shrugged, “It’s been a while, why?”
I said, “You mean it’s been a very LONG while, right?” He grunted that I could be right. So I asked, “Could it be so long a while, that maybe you NEVER actually looked at your business website on your phone? Is that possible?”
I sat there and watched him. It was all unfolding right there in front of me, on his face: puzzlement, confusion, worry. “W-why….why are you asking me that?” he said as he reached for his phone and began typing the web address into it.
The problem I had seen when I checked it myself earlier that day, was that he actually didn’t even have a mobile website. It simply did not exist. So his phone, like mine earlier, and everyone’s phone over the last 5 years, all brought up his DESKTOP website instead. And as you will recall, his desktop site was an over-crowded mess even on a large desktop. On a small phone’s screen much of the text was 2-3 pixels tall. In other words, completely illegible.
He looked at his phone for a moment. At first I assumed he got it, but he didn’t. He still didn’t understand the breadth of damage to his business and brand. 
“Oh, what’s wrong with it? Why does it look like that?” he asked. I explained he didn’t actually have a mobile site, and THAT was the reason for his lack of success with Google ads. 
“No, you don’t understand,” he replied. “I didn’t even get ONE phone call from those ads. It was just a TOTAL waste.” He was the one not understanding.
I explained in more detail:  
“When someone sees an online ad, what do they do with it? If interested, they CLICK on it. And where does that click take them? To your website, right? Even if you put your phone number in the ad, almost everyone will first click the ad to learn more. Where? ON YOUR WEBSITE.”
“The problem is, almost all of your potential customers are busy, active and mobile people, who do their gym research between other activities, like while in the back of a cab. Without a mobile site, this is what they are all seeing, unreadable gibberish.” 
“Would you call a company that placed an ad and sent you to a website filled with gibberish? I wouldn’t. Most people wouldn’t. You spent thousands to advertise that you do NOT have your act together.”
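For the technically inclined: whether a page even declares itself mobile-ready is easy to spot in its HTML. Here is a minimal sketch (using made-up HTML, not the gym's actual site) that checks for the viewport meta tag browsers look for before applying mobile rendering:

```python
# A minimal sketch (hypothetical HTML, not the gym's real site) of checking
# whether a page declares a mobile viewport -- the meta tag browsers look for
# before rendering a page as mobile-friendly.
import re

def has_mobile_viewport(html: str) -> bool:
    """Return True if the page declares a viewport meta tag."""
    return re.search(r'<meta[^>]+name=["\']viewport["\']', html, re.I) is not None

# Hypothetical desktop-only page: no viewport declaration at all.
desktop_only = "<html><head><title>Gym</title></head><body>...</body></html>"

# Hypothetical responsive page: declares a device-width viewport.
responsive = ('<html><head><meta name="viewport" '
              'content="width=device-width, initial-scale=1"></head></html>')

print(has_mobile_viewport(desktop_only))  # False
print(has_mobile_viewport(responsive))    # True
```

A missing viewport tag isn't the whole story (true mobile-friendliness also requires responsive layout and media queries), but it is the classic tell that a site predates mobile design.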
It was starting to sink in. And maybe, just maybe, he was starting to guesstimate the amount of money he had wasted in advertising over the years. The amount of hard-earned dollars he might as well have just flushed down the toilet. It was painful to watch.
Sometimes, this is where cheap websites take you. And why getting a real BUSINESS website from an experienced marketing firm makes all the difference.
The take-away here is understanding how marketing works, and how all the pieces need to fit.
THE QUESTION IS: How long can you afford to keep losing business by not doing things right?
Sit down with Graphlink Media so we can help you streamline your marketing, and make money.
tracisimpson · 6 years
A Machine Learning Guide for Average Humans
Posted by alexis-sanders
Machine learning (ML) has grown consistently in worldwide prevalence. Its implications have stretched from small, seemingly inconsequential victories to groundbreaking discoveries. The SEO community is no exception. An understanding and intuition of machine learning can support our understanding of the challenges and solutions Google's engineers are facing, while also opening our minds to ML's broader implications.
The advantages of gaining a general understanding of machine learning include:
Gaining empathy for engineers, who are ultimately trying to establish the best results for users
Understanding what problems machines are solving for, their current capabilities and scientists' goals
Understanding the competitive ecosystem and how companies are using machine learning to drive results
Preparing oneself for what many industry leaders call a major shift in our society (Andrew Ng refers to AI as a "new electricity")
Understanding basic concepts that often appear within research (it's helped me with understanding certain concepts that appear within Google Brain's Research)
Growing as an individual and expanding your horizons (you might really enjoy machine learning!)
When code works and data is produced, it's a very fulfilling, empowering feeling (even if it's a very humble result)
I spent a year taking online courses, reading books, and learning about learning (...as a machine). This post is the fruit borne of that labor -- it covers 17 machine learning resources (including online courses, books, guides, conference presentations, etc.) comprising the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). I've also added a summary of "If I were to start over again, how I would approach it."
This article isn't about credit or degrees. It's about regular Joes and Joannas with an interest in machine learning, and who want to spend their learning time efficiently. Most of these resources will consume over 50 hours of commitment. Ain't nobody got time for a painful waste of a work week (especially when this is probably completed during your personal time). The goal here is for you to find the resource that best suits your learning style. I genuinely hope you find this research useful, and I encourage comments on which materials prove most helpful (especially ones not included)! #HumanLearningMachineLearning
Executive summary:
Here's everything you need to know in a chart:
| Machine Learning Resource | Time (hours) | Cost ($) | Year |
| --- | --- | --- | --- |
| Jason Mayes' Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to | 2 | $0 | '17 |
| {ML} Recipes with Josh Gordon Playlist | 2 | $0 | '16 |
| Machine Learning Crash Course | 15 | $0 | '18 |
| OCDevel Machine Learning Guide Podcast | 30 | $0 | '17- |
| Kaggle's Machine Learning Track (part 1) | 6 | $0 | '17 |
| Fast.ai (part 1) | 70 | $70* | '16 |
| Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems | 20 | $25 | '17 |
| Udacity's Intro to Machine Learning (Kate/Sebastian) | 60 | $0 | '15 |
| Andrew Ng's Coursera Machine Learning | 55 | $0 | '11 |
| iPullRank Machine Learning Guide | 3 | $0 | '17 |
| Review Google PhD | 2 | $0 | '17 |
| Caltech Machine Learning on iTunes | 27 | $0 | '12 |
| Pattern Recognition & Machine Learning by Christopher Bishop | 150 | $75 | '06 |
| Machine Learning: Hands-on for Developers and Technical Professionals | 15 | $50 | '15 |
| Introduction to Machine Learning with Python: A Guide for Data Scientists | 15 | $25 | '16 |
| Udacity's Machine Learning by Georgia Tech | 96 | $0 | '15 |
| Machine Learning Stanford iTunes by Andrew Ng | 25 | $0 | '08 |

(The original chart also rated each resource on Credibility, Code, Math, and Enjoyability; those ratings were graphics and are not reproduced here.)
*Free, but there is the cost of running an AWS EC2 instance (~$70 when I finished, but I did tinker a ton and made a Rick and Morty script generator, which I ran many epochs [rounds] of...)
Here's my suggested program:
1. Starting out (estimated 60 hours)
Start with shorter content targeting beginners. This will allow you to get the gist of what's going on with minimal time commitment.
Commit three hours to Jason Mayes' Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to.
Commit two hours to watch Google's {ML} Recipes with Josh Gordon YouTube Playlist.
Sign up for Sam DeBrule's Machine Learnings newsletter.
Work through Google's Machine Learning Crash Course.
Start listening to OCDevel's Machine Learning Guide Podcast (skip episodes 1, 3, 16, 21, and 26) in your car, working out, and/or when using hands and eyes for other activities.
Commit two days to working through Kaggle's Machine Learning Track part 1.
2. Ready to commit (estimated 80 hours)
By this point, learners would understand their interest levels. Continue with content focused on applying relevant knowledge as fast as possible.
Commit to Fast.ai 10 hours per week, for 7 weeks. If you have a friend/mentor that can help you work through AWS setup, definitely lean on any support in installation (it's 100% the worst part of ML).
Acquire Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, and read the first two chapters immediately. Then use this as supplemental to the Fast.ai course.
3. Broadening your horizons (estimated 115 hours)
If you've made it through the last section and are still hungry for more knowledge, move on to broadening your horizons. Read content focused on teaching the breadth of machine learning -- building an intuition for what the algorithms are trying to accomplish (whether visually or mathematically).
Start watching videos and participating in Udacity's Intro to Machine Learning (by Sebastian Thrun and Katie Malone).
Work through Andrew Ng's Coursera Machine Learning course.
Your next steps
By this point, you will already have AWS running instances, a mathematical foundation, and an overarching view of machine learning. This is your jumping-off point to determine what you want to do.
You should be able to determine your next step based on your interest, whether it's entering Kaggle competitions; doing Fast.ai part two; diving deep into the mathematics with Pattern Recognition & Machine Learning by Christopher Bishop; giving Andrew Ng's newer Deeplearning.ai course on Coursera; learning more about specific tech stacks (TensorFlow, Scikit-Learn, Keras, Pandas, Numpy, etc.); or applying machine learning to your own problems.
Why am I recommending these steps and resources?
I am not qualified to write an article on machine learning. I don't have a PhD. I took one statistics class in college, which marked the first moment I truly understood "fight or flight" reactions. And to top it off, my coding skills are lackluster (at their best, they're chunks of reverse-engineered code from Stack Overflow). Despite my many shortcomings, this piece had to be written by someone like me, an average person.
Statistically speaking, most of us are average (ah, the bell curve/Gaussian distribution always catches up to us). Since I'm not tied to any elitist sentiments, I can be real with you. Below is a high-level summary of my reviews of all the classes I took, along with a plan for how I would approach learning machine learning if I could start over.
In-depth reviews of machine learning courses:
Starting out
Jason Mayes' Machine Learning 101 slidedeck: 2 years of head-banging, so you don't have to ↓
Need to Know: A stellar high-level overview of machine learning fundamentals in an engaging and visually stimulating format.
Loved:
Very user-friendly, engaging, and playful slidedeck.
Has the potential to take some of the pain out of the process, through introducing core concepts.
Breaks content into beginner/need-to-know material (green) and intermediate material that is mostly noise for those just starting out (blue).
Provides resources to dive deeper into machine learning.
Provides some top people to follow in machine learning.
Disliked:
That there is not more! Jason's creativity, visual-based teaching approach, and quirky sense of humor all support the absorption of the material.
Lecturer:
Jason Mayes:
Senior Creative Technologist and Research Engineer at Google
Masters in Computer Science from the University of Bristol
Personal Note: He's also kind on Twitter! :)
Links:
Machine Learning 101 slide deck
Tips on Watching:
Set aside 2-4 hours to work through the deck once.
Since there is a wealth of knowledge, refer back as needed (or as a grounding source).
Identify areas of interest and explore the resources provided.
{ML} Recipes with Josh Gordon ↓
Need to Know: This mini-series YouTube-hosted playlist covers the very fundamentals of machine learning with opportunities to complete exercises.
Loved:
It is genuinely beginner-focused.
They make no assumptions of prior knowledge.
They gloss over potentially complex topics that might serve as noise for beginners.
Playlist ~2 hours
Very high-quality filming, audio, and presentation, almost to the point where it had its own aesthetic.
Covers some examples in scikit-learn and TensorFlow, which felt modern and practical.
Josh Gordon was an engaging speaker.
Disliked:
I could not get Docker working on Windows (the course's suggested setup). This wasn't a huge deal, since I already had my AWS setup by this point; however, it was a bit of a bummer since it made it impossible to follow certain steps exactly.
Issue: Every time I tried to download (over the course of two weeks), the .exe file would recursively start and keep spinning until either my memory ran out, computer crashed, or I shut my computer down. I sent this to Docker's Twitter account to no avail.
Lecturer:
Josh Gordon:
Developer Advocate for TensorFlow at Google
Leads Machine Learning advocacy at Google
Member of the Udacity AI & Data Industry Advisory Board
Masters in Computer Science from Columbia University
Links:
Hello World - Machine Learning Recipes #1 (YouTube)
GitHub: Machine Learning Recipes with Josh Gordon
Tips on Watching:
The playlist is short (only ~1.5 hours screen time). However, it can be a bit fast-paced at times (especially if you like mimicking the examples), so set aside 3-4 hours to play around with examples and allow time for installation, pausing, and following along.
Take time to explore code labs.
Google's Machine Learning Crash Course with TensorFlow APIs ↓
Need to Know: A Google researcher-made crash course on machine learning that is interactive and offers its own built-in coding system!
Loved:
Different formats of learning: high-quality video (with adjustable speed and closed captioning), readings, quizzes (with explanations), visuals (including whiteboarding), interactive components/playgrounds, and code lab exercises (run directly in your browser, no setup required!)
Non-intimidating
One of my favorite quotes: "You don't need to understand the math to be able to take a look at the graphical interpretation."
Broken down into digestible sections
Introduces key terms
Disliked:
N/A
Lecturers:
Multiple Google researchers participated in this course, including:
Peter Norvig
Director of Research at Google Inc.
Previously he directed Google's core search algorithms group.
He is co-author of Artificial Intelligence: A Modern Approach
D. Sculley
Senior Staff Software Engineer at Google
KDD award-winning papers
Works on massive-scale ML systems for online advertising
Co-authored an ML research paper on optimizing chocolate chip cookies
According to his personal website, he prefers to go by "D."
Cassandra Xia
Programmer, Software Engineer at Google
She has some really cool (and cute) projects based on learning statistics concepts interactively
Maya Gupta
Leads Glassbox Machine Learning R&D team at Google
Associate Professor of Electrical Engineering at the University of Washington (2003-2012)
In 2007, Gupta received the PECASE award from President George Bush for her work in classifying uncertain (e.g. random) signals
Gupta also runs Artifact Puzzles, the second-largest US maker of wooden jigsaw puzzles
Sally Goldman
Research Scientist at Google
Co-author of A Practical Guide to Data Structures and Algorithms Using Java
Has numerous journal publications, has taught classes at Washington University, and has contributed to the ML community
Links:
Machine Learning Crash Course
Tips on Doing:
Actively work through playground and coding exercises
OCDevel's Machine Learning Guide Podcast ↓
Need to Know: This podcast focuses on the high-level fundamentals of machine learning, including basic intuition, algorithms, math, languages, and frameworks. It also includes references to learn more on each episode's topic.
Loved:
Great for trips (when traveling a ton, it was an easy listen).
The podcast makes machine learning fun with interesting and compelling analogies.
Tyler is a big fan of Andrew Ng's Coursera course and reviews its concepts very well, so the two pair together nicely.
Covers the canonical resources for learning more on a particular topic.
Disliked:
Certain episodes were more theory-based: all interesting, yet impractical.
Due to limited funding, the project is a bit slow to update and has fewer than 30 episodes.
Podcaster:
Tyler Renelle:
Machine learning engineer focused on time series and reinforcement learning
Background in full-stack JavaScript; 10 years in web and mobile development
Creator of HabitRPG, an app that treats habits as an RPG game
Links:
Machine Learning Guide podcast
Machine Learning Guide podcast (iTunes)
Tips on Listening:
Listen along your journey to help solidify understanding of topics.
Skip episodes 1, 3, 16, 21, and 26 (unless their topics interest and inspire you!).
Kaggle Machine Learning Track (Lesson 1) ↓
Need to Know: A simple code lab that covers the very basics of machine learning with scikit-learn and Pandas by having you apply the examples to another dataset.
Loved:
A more active form of learning.
An engaging code lab that encourages participants to apply knowledge.
This track offers a built-in Python notebook on Kaggle with all input files included. This removed any and all setup/installation issues.
Side note: It's a bit different than Jupyter notebook (e.g., have to click into a cell to add another cell).
Each lesson is short, which made the entire lesson go by very fast.
Disliked:
The writing in the first lesson didn't initially make it clear that one would need to apply the lesson's knowledge to their own workbook.
It wasn't a big deal, but when I started referencing files from the lesson, I dove into the files in my workbook only to find they didn't exist; the knowledge was supposed to be applied, not transcribed.
Lecturer:
Dan Becker:
Data Scientist at Kaggle
Undergrad in Computer Science, PhD in Econometrics
Supervised data science consultant for six Fortune 100 companies
Contributed to the Keras and Tensorflow libraries
Finished 2nd (out of 1353 teams) in $3 million Heritage Health Prize data mining competition
Speaks at deep learning workshops at events and conferences
Links:
https://www.kaggle.com/learn/machine-learning
Tips on Doing:
Read the exercises and apply to your dataset as you go.
Try lesson 2, which covers more complex/abstract topics (note: this second lesson took a bit longer to work through).
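To give a flavor of what the track teaches: the core workflow boils down to a few lines of pandas and scikit-learn. This is a rough sketch on a tiny made-up housing table, not Kaggle's actual notebook (the real lesson uses Kaggle-hosted input files):

```python
# A hedged sketch of the fit/predict workflow the Kaggle track walks through,
# using a tiny hypothetical housing dataset (the real lesson loads CSV files).
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Hypothetical training data: two features and a target price.
data = pd.DataFrame({
    "rooms":    [2, 3, 4, 4, 5, 6],
    "area_sqm": [50, 70, 90, 95, 120, 150],
    "price":    [100, 150, 200, 210, 280, 350],
})

X = data[["rooms", "area_sqm"]]  # feature matrix
y = data["price"]                # prediction target

model = DecisionTreeRegressor(random_state=0)
model.fit(X, y)

# Predict on the training rows first, as the lesson does.
predictions = model.predict(X)
print(predictions)
```

Applying this same handful of lines to a fresh dataset is exactly the exercise the track asks of you.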
Ready to commit
Fast.ai (part 1 of 2) ↓
Need to Know: Hands-down the most engaging and active form of learning ML. The source I would most recommend for anyone (although the training plan does help build up to this course). This course is about learning through coding; it is the only course where I truly saw the practical mechanics start to come together. It involves applying the most practical solutions to the most common problems (while also building an intuition for those solutions).
Loved:
Course Philosophy:
Active learning approach
"Go out into the world and understand underlying mechanics (of machine learning by doing)."
Counter-culture to the exclusivity of the machine learning field, focusing on inclusion.
"Let's do shit that matters to people as quickly as possible."
Highly pragmatic approach with tools that are currently being used (Jupyter Notebooks, scikit-learn, Keras, AWS, etc.).
Show an end-to-end process that you get to complete and play with in a development environment.
Math is involved, but is not prohibitive. Excel files helped to consolidate information/interact with information in a different way, and Jeremy spends a lot of time recapping confusing concepts.
Amazing set of learning resources that allow for all different styles of learning, including:
Video Lessons
Notes
Jupyter Notebooks
Assignments
Highly active forums
Resources on Stackoverflow
Readings/resources
Jeremy often references popular academic texts
Jeremy's TEDx talk in Brussels
Jeremy really pushes one to do extra and put in the effort by teaching interesting problems and engaging one in solving them.
It's a huge time commitment; however, it's worth it.
All of the course's profits are donated.
Disliked:
Overview covers their approach to learning (obviously I'm a fan!). If you're already drinking the Kool-aid, skip past.
I struggled through the AWS setup (13-minute video) for about five hours (however, it felt so good when it was up and running!).
Because of its practicality and concentration on solutions used today to solve popular problem types (image recognition, text generation, etc.), it lacks breadth of machine learning topics.
Lecturers:
Jeremy Howard:
Distinguished Research Scientist at the University of San Francisco
Faculty member at Singularity University
Young Global Leader with the World Economic Forum
Founder of Enlitic (the first company to apply deep learning to medicine)
Former President and Chief Scientist of the data science platform Kaggle
Rachel Thomas:
PhD in Math from Duke
One of Forbes' "20 Incredible Women Advancing AI Research"
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend about ~70 hours for this course (it's worth it).
Don't forget to shut off your AWS instance.
Balance out machine learning knowledge with a course with more breadth.
Consider giving part two of the Fast.ai program a shot!
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
Book contains an amazing introduction to machine learning that briskly provides an overarching quick view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem.
Immediately afterwards, Aurélien pushes a user to attempt to apply this solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure one has grasped the content and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introduction was very useful and put everything into context. This general-to-specific structure supported the learning.
Disliked:
Installation was a common source of issues during the beginning of my journey, and the text glossed over it. The frustration most people experience with installation should have been addressed with more resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introductions to each chapter thoroughly, read the chapter (pay careful attention to code), review the questions at the end (highlight any in-text answer), make a copy of Aurélien's GitHub and make sure everything works on your setup, re-type the notebooks, go to Kaggle and try on other datasets.
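The end-to-end shape of the chapter 2 exercise (split the data, train a model, score it on held-out rows) looks roughly like the sketch below. This uses synthetic data and a plain linear model for illustration; the book's actual example works through the California housing dataset with richer pipelines:

```python
# A rough sketch of the end-to-end pattern from chapter 2: train/test split,
# fit, then evaluate on data the model has never seen. Synthetic data here,
# not the book's California housing example.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))                 # one synthetic feature
y = 3.0 * X.ravel() + 5.0 + rng.normal(0, 0.5, 200)   # linear target + noise

# Hold out 20% of the rows for honest evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)

# RMSE on the held-out rows is the generalization check.
rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"held-out RMSE: {rmse:.2f}")
```

Re-typing this pattern against a new dataset, as the book urges after chapter 2, is where it starts to stick.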
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focus on developing a visual intuition on what each model is trying to accomplish.
This visual learning mathematics approach is very useful.
Covers a vast variety and breadth of models and machine learning basics.
In terms of presenting concepts, there was a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively notes documentation and suggests where viewers can learn more/reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, coding exercises, and mini-projects.
This was the first course I started, and the limited instructions on setting up the environment, plus many failed attempts, caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
There is extra code added to support the viewers; however, it's done so with little acknowledgement as to what it's actually doing. This made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Podcaster with Ben Jaffe (currently Facebook UI Engineer and a music aficionado) on a machine learning podcast Linear Digressions (100+ episodes)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: The Andrew Ng Coursera course is the most referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus on building mathematical intuition behind machine learning models. You can submit assignments and earn a grade for free; if you want a certificate, you can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Contains a very classic machine learning education approach with a high level of math focus.
Quizzes interspersed in courses and after each lesson support understanding and overall learning.
The sessions for the course are flexible; the option to switch into a different section is always available.
Disliked:
The mathematical notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, the main concentration was MATLAB and Octave versus more modern languages and resources.
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched a new course (August 2017) called DeepLearning.ai, a ~15 week course containing five mini-courses ($49 USD per month to continue learning after trial period of 7 days ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined with setting aside timing (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
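The course's central mechanic, minimizing a cost function with gradient descent, is easy to reproduce outside of MATLAB/Octave. Below is a minimal numpy sketch of linear regression the way the course frames it; the toy data and variable names are my own invention, not course material.

```python
import numpy as np

# Toy data generated from y = 2x + 1 (no noise, so the fit should be near-exact).
X = np.linspace(0, 1, 50)
y = 2 * X + 1

theta0, theta1 = 0.0, 0.0  # parameters (Ng writes these as theta)
lr = 0.5                   # learning rate (alpha in the course)

def cost(t0, t1):
    # Mean squared error cost J(theta), halved as in the lectures.
    return np.mean((t0 + t1 * X - y) ** 2) / 2

initial_cost = cost(theta0, theta1)
for _ in range(2000):
    error = theta0 + theta1 * X - y
    theta0 -= lr * np.mean(error)      # partial derivative w.r.t. theta0
    theta1 -= lr * np.mean(error * X)  # partial derivative w.r.t. theta1
final_cost = cost(theta0, theta1)
```

Watching `final_cost` shrink toward zero is the same feedback loop the course's programming assignments build.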
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including: MonkeyLearn and Orange)
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual, user-friendly interface that offers the potential to tell some pretty compelling stories.
Example: World Happiness dataset, visualized by:
X-axis: Happiness Score
Y-axis: Economy
Color: Health
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learnings and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
Review Google PhD ↓
Need to Know: A two-hour presentation from Google's 2017 I/O conference that walks through getting 99% accuracy on the MNIST dataset (a famous dataset of handwritten digits that the model must learn to identify).
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g., ReLU, CNNs, RNNs, etc.)
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference session, not a comprehensive view of machine learning.
Also, a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; 2 hours of screen time doesn't count all of the Googling and processing time for this one.
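Concepts from the talk such as ReLU and softmax are each only a few lines of numpy. The sketch below runs a single made-up "image" through one hidden layer; the sizes mirror MNIST (784 inputs, 10 classes), but the random weights are purely illustrative, with no training involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # ReLU keeps positive activations and zeroes out the rest.
    return np.maximum(0, z)

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

x = rng.normal(size=(1, 784))          # one flattened 28x28 "image"
W1, b1 = rng.normal(size=(784, 64)) * 0.01, np.zeros(64)
W2, b2 = rng.normal(size=(64, 10)) * 0.01, np.zeros(10)

hidden = relu(x @ W1 + b1)             # hidden layer activations
probs = softmax(hidden @ W2 + b2)      # 10-way class probabilities
```

The output rows always sum to one, which is the property that lets the talk read the final layer as class probabilities.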
Caltech Machine Learning iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematical intuition behind many machine learning models. Dr. Abu-Mostafa is a raconteur who uses helpful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students: "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to just give students the final, they would just memorize the answers (i.e., they would overfit to the data) and not genuinely learn the material. The final is a test to show how much students learn.
The last 1/2 hour of the class is always a Q&A, where students can ask questions. Their questions were useful to understanding the topic more in-depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The course on neural networks was a close second!
Disliked:
Although the content is mostly evergreen, the course was released in 2012 and could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it conceptually pulls the whole course together. The map of the course in the final slides was particularly useful for organizing the information taught.
Image source: http://work.caltech.edu/slides/slides18.pdf
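The practice-exam analogy is simple to demonstrate in code: a model that memorizes its training data (here, 1-nearest-neighbor regression in plain numpy) aces the "practice exam" but not the held-out "final." This is my own toy example, not course material.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

x_train, y_train = x[:30], y[:30]   # the "practice exams"
x_test, y_test = x[30:], y[30:]     # the "final exam"

def one_nn_predict(queries):
    # 1-nearest-neighbor regression: copy the label of the closest training point.
    nearest = np.abs(queries[:, None] - x_train[None, :]).argmin(axis=1)
    return y_train[nearest]

train_mse = np.mean((one_nn_predict(x_train) - y_train) ** 2)
test_mse = np.mean((one_nn_predict(x_test) - y_test) ** 2)
```

A zero training error paired with a nonzero test error is exactly the overfitting gap the lectures formalize.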
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook. I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. There was too much math, and there were too many prerequisites, to tackle (even with a multitude of Google sessions).
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the history aside sections, where Bishop talked about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematical text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Added fun anecdotes that makes it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's about ~300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning.
Contain subtle jokes that add a bit of fun.
Tip to use the Python package manager Anaconda with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop. I had to reread the first few chapters before I figured out it was support for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
And he makes some incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is a part of the Online Masters Degree. Although taking this course here will not earn credit towards the OMS degree, it's still a non-watered-down college teaching philosophy approach.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including: VC Dimension versus Bayesian, Occam's razor, etc.)
Discusses Markov Decision Processes, a topic that didn't really come up in many other introductory machine learning courses, but which is referenced within Google patents.
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
The tablet version didn't function flawlessly: some videos were missing content (which I had to mark down and review on a desktop), the app would crash randomly, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for Cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focus on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focusing on those lessons.
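The Markov Decision Process material boils down to repeatedly applying the Bellman update. Here's a toy value iteration over an invented three-state chain with a single action (so the usual max over actions is trivial); the states, rewards, and discount factor are all made up for illustration.

```python
import numpy as np

# Three states in a chain; transitions[s] is the next state,
# rewards[s] the reward collected when leaving state s.
transitions = [1, 2, 2]        # state 2 is absorbing
rewards = [0.0, 0.0, 1.0]
gamma = 0.9                    # discount factor

values = np.zeros(3)
for _ in range(100):
    # Bellman update: V(s) = R(s) + gamma * V(next state)
    values = np.array([rewards[s] + gamma * values[transitions[s]]
                       for s in range(3)])
```

After enough sweeps, `values` converges to the discounted sum of future rewards from each state, here approaching 1 / (1 - gamma) = 10 for the absorbing state.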
Andrew Ng's Stanford's Machine Learning iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), video/audio are a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few courses, it became dreadfully boring. I made it to course six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
The video and audio quality made it a pain to watch.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA sessions and supplementary learning, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!
A Machine Learning Guide for Average Humans
Posted by alexis-sanders
Machine learning (ML) has grown consistently in worldwide prevalence. Its implications have stretched from small, seemingly inconsequential victories to groundbreaking discoveries. The SEO community is no exception. An understanding and intuition of machine learning can support our understanding of the challenges and solutions Google's engineers are facing, while also opening our minds to ML's broader implications.
The advantages of gaining a general understanding of machine learning include:
Gaining empathy for engineers, who are ultimately trying to establish the best results for users
Understanding what problems machines are solving for, their current capabilities and scientists' goals
Understanding the competitive ecosystem and how companies are using machine learning to drive results
Preparing oneself for what many industry leaders call a major shift in our society (Andrew Ng refers to AI as a "new electricity")
Understanding basic concepts that often appear within research (it's helped me with understanding certain concepts that appear within Google Brain's Research)
Growing as an individual and expanding your horizons (you might really enjoy machine learning!)
When code works and data is produced, it's a very fulfilling, empowering feeling (even if it's a very humble result)
I spent a year taking online courses, reading books, and learning about learning (...as a machine). This post is the fruit borne of that labor -- it covers 17 machine learning resources (including online courses, books, guides, conference presentations, etc.) comprising the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). I've also added a summary of "If I were to start over again, how I would approach it."
This article isn't about credit or degrees. It's about regular Joes and Joannas with an interest in machine learning, and who want to spend their learning time efficiently. Most of these resources will consume over 50 hours of commitment. Ain't nobody got time for a painful waste of a work week (especially when this is probably completed during your personal time). The goal here is for you to find the resource that best suits your learning style. I genuinely hope you find this research useful, and I encourage comments on which materials prove most helpful (especially ones not included)! #HumanLearningMachineLearning
Executive summary:
Here's everything you need to know in a chart:
| Machine Learning Resource | Time (hours) | Cost ($) | Year |
| --- | --- | --- | --- |
| Jason Mayes' Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to | 2 | $0 | '17 |
| {ML} Recipes with Josh Gordon Playlist | 2 | $0 | '16 |
| Machine Learning Crash Course | 15 | $0 | '18 |
| OCDevel Machine Learning Guide Podcast | 30 | $0 | '17- |
| Kaggle's Machine Learning Track (part 1) | 6 | $0 | '17 |
| Fast.ai (part 1) | 70 | $70* | '16 |
| Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems | 20 | $25 | '17 |
| Udacity's Intro to Machine Learning (Kate/Sebastian) | 60 | $0 | '15 |
| Andrew Ng's Coursera Machine Learning | 55 | $0 | '11 |
| iPullRank Machine Learning Guide | 3 | $0 | '17 |
| Review Google PhD | 2 | $0 | '17 |
| Caltech Machine Learning on iTunes | 27 | $0 | '12 |
| Pattern Recognition & Machine Learning by Christopher Bishop | 150 | $75 | '06 |
| Machine Learning: Hands-on for Developers and Technical Professionals | 15 | $50 | '15 |
| Introduction to Machine Learning with Python: A Guide for Data Scientists | 15 | $25 | '16 |
| Udacity's Machine Learning by Georgia Tech | 96 | $0 | '15 |
| Machine Learning Stanford iTunes by Andrew Ng | 25 | $0 | '08 |

(The original chart also rated each resource on Credibility, Code, Math, and Enjoyability; those image-based ratings don't survive in text form.)
*Free, but there is the cost of running an AWS EC2 instance (~$70 when I finished, but I did tinker a ton and made a Rick and Morty script generator, which I ran many epochs [rounds] of...)
Here's my suggested program:
1. Starting out (estimated 60 hours)
Start with shorter content targeting beginners. This will allow you to get the gist of what's going on with minimal time commitment.
Commit three hours to Jason Mayes' Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to.
Commit two hours to watch Google's {ML} Recipes with Josh Gordon YouTube Playlist.
Sign up for Sam DeBrule's Machine Learnings newsletter.
Work through Google's Machine Learning Crash Course.
Start listening to OCDevel's Machine Learning Guide Podcast (skip episodes 1, 3, 16, 21, and 26) in your car, working out, and/or when using hands and eyes for other activities.
Commit two days to working through Kaggle's Machine Learning Track part 1.
2. Ready to commit (estimated 80 hours)
By this point, learners would understand their interest levels. Continue with content focused on applying relevant knowledge as fast as possible.
Commit to Fast.ai for 10 hours per week, for 7 weeks. If you have a friend or mentor who can help you work through the AWS setup, definitely lean on them for support with installation (it's 100% the worst part of ML).
Acquire Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, and read the first two chapters immediately. Then use this as supplemental to the Fast.ai course.
3. Broadening your horizons (estimated 115 hours)
If you've made it through the last section and are still hungry for more knowledge, move on to broadening your horizons. Read content focused on teaching the breadth of machine learning -- building an intuition for what the algorithms are trying to accomplish (whether visual or mathematically).
Start watching videos and participating in Udacity's Intro to Machine Learning (by Sebastian Thrun and Katie Malone).
Work through Andrew Ng's Coursera Machine Learning course.
Your next steps
By this point, you will already have AWS running instances, a mathematical foundation, and an overarching view of machine learning. This is your jumping-off point to determine what you want to do.
You should be able to determine your next step based on your interest, whether it's entering Kaggle competitions; doing Fast.ai part two; diving deep into the mathematics with Pattern Recognition & Machine Learning by Christopher Bishop; giving Andrew Ng's newer Deeplearning.ai course on Coursera; learning more about specific tech stacks (TensorFlow, Scikit-Learn, Keras, Pandas, Numpy, etc.); or applying machine learning to your own problems.
Why am I recommending these steps and resources?
I am not qualified to write an article on machine learning. I don't have a PhD. I took one statistics class in college, which marked the first moment I truly understood "fight or flight" reactions. And to top it off, my coding skills are lackluster (at their best, they're chunks of reverse-engineered code from Stack Overflow). Despite my many shortcomings, this piece had to be written by someone like me, an average person.
Statistically speaking, most of us are average (ah, the bell curve/Gaussian distribution always catches up to us). Since I'm not tied to any elitist sentiments, I can be real with you. Below contains a high-level summary of my reviews on all of the classes I took, along with a plan for how I would approach learning machine learning if I could start over. Click to expand each course for the full version with notes.
In-depth reviews of machine learning courses:
Starting out
Jason Mayes' Machine Learning 101 slidedeck: 2 years of head-banging, so you don't have to ↓
Need to Know: A stellar high-level overview of machine learning fundamentals in an engaging and visually stimulating format.
Loved:
Very user-friendly, engaging, and playful slidedeck.
Has the potential to take some of the pain out of the process, through introducing core concepts.
Breaks up content by beginner/need-to-know (green), and intermediate/less-useful noise (specifically for individuals starting out) (blue).
Provides resources to dive deeper into machine learning.
Provides some top people to follow in machine learning.
Disliked:
That there is not more! Jason's creativity, visual-based teaching approach, and quirky sense of humor all support the absorption of the material.
Lecturer:
Jason Mayes:
Senior Creative Technologist and Research Engineer at Google
Masters in Computer Science from the University of Bristol
Personal Note: He's also kind on Twitter! :)
Links:
Machine Learning 101 slide deck
Tips on Watching:
Set aside 2-4 hours to work through the deck once.
Since there is a wealth of knowledge, refer back as needed (or as a grounding source).
Identify areas of interest and explore the resources provided.
{ML} Recipes with Josh Gordon ↓
Need to Know: This mini-series YouTube-hosted playlist covers the very fundamentals of machine learning with opportunities to complete exercises.
Loved:
It is genuinely beginner-focused.
It makes no assumption of any prior knowledge.
It glosses over potentially complex topics that would otherwise serve as noise.
The playlist runs ~2 hours.
Very high-quality filming, audio, and presentation, almost to the point where it had its own aesthetic.
Covers some examples in scikit-learn and TensorFlow, which felt modern and practical.
Josh Gordon was an engaging speaker.
Disliked:
I could not get Docker (the suggested package manager) working on Windows. This wasn't a huge deal, since I already had my AWS setup by this point; however, it was a bit of a bummer since it made it impossible to follow certain steps exactly.
Issue: Every time I tried to download (over the course of two weeks), the .exe file would recursively start and keep spinning until either my memory ran out, computer crashed, or I shut my computer down. I sent this to Docker's Twitter account to no avail.
Lecturer:
Josh Gordon:
Developer Advocate for TensorFlow at Google
Leads Machine Learning advocacy at Google
Member of the Udacity AI & Data Industry Advisory Board
Masters in Computer Science from Columbia University
Links:
Hello World - Machine Learning Recipes #1 (YouTube)
GitHub: Machine Learning Recipes with Josh Gordon
Tips on Watching:
The playlist is short (only ~1.5 hours screen time). However, it can be a bit fast-paced at times (especially if you like mimicking the examples), so set aside 3-4 hours to play around with examples and allow time for installation, pausing, and following along.
Take time to explore code labs.
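The first recipe trains a scikit-learn decision tree to tell apples from oranges using weight and texture. This is a reconstruction from memory, so treat the exact numbers as illustrative:

```python
from sklearn import tree

# Features: [weight in grams, texture] where texture 1 = smooth, 0 = bumpy.
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]  # 0 = apple, 1 = orange

# Fit the tree and classify an unseen fruit.
clf = tree.DecisionTreeClassifier()
clf.fit(features, labels)
prediction = clf.predict([[160, 0]])  # a heavy, bumpy fruit
```

Six lines, and a heavy, bumpy fruit comes back classified as an orange, which is the whole point of the episode.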
Google's Machine Learning Crash Course with TensorFlow APIs ↓
Need to Know: A Google researcher-made crash course on machine learning that is interactive and offers its own built-in coding system!
Loved:
Different formats of learning: high-quality video (with ability to adjust speed, closed captioning), readings, quizzes (with explanations), visuals (including whiteboarding), interactive components/ playgrounds, code lab exercises (run directly in your browser (no setup required!))
Non-intimidating
One of my favorite quotes: "You don't need to understand the math to be able to take a look at the graphical interpretation."
Broken down into digestible sections
Introduces key terms
Disliked:
N/A
Lecturers:
Multiple Google researchers participated in this course, including:
Peter Norvig
Director of Research at Google Inc.
Previously he directed Google's core search algorithms group.
He is co-author of Artificial Intelligence: A Modern Approach
D. Sculley
Senior Staff Software Engineer at Google
KDD award-winning papers
Works on massive-scale ML systems for online advertising
Was part of a research ML paper on optimizing chocolate chip cookies
According to his personal website, he prefers to go by "D."
Cassandra Xia
Programmer, Software Engineer at Google
She has some really cool (and cute) projects based on learning statistics concepts interactively
Maya Gupta
Leads Glassbox Machine Learning R&D team at Google
Associate Professor of Electrical Engineering at the University of Washington (2003-2012)
In 2007, Gupta received the PECASE award from President George Bush for her work in classifying uncertain (e.g. random) signals
Gupta also runs Artifact Puzzles, the second-largest US maker of wooden jigsaw puzzles
Sally Goldman
Research Scientist at Google
Co-author of A Practical Guide to Data Structures and Algorithms Using Java
Numerous journals, classes taught at Washington University, and contributions to the ML community
Links:
Machine Learning Crash Course
Tips on Doing:
Actively work through playground and coding exercises
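The playground exercises are all variations on nudging a decision boundary until a loss goes down. The real exercises use TensorFlow; the numpy sketch below shows the same underlying loop, logistic regression trained by gradient descent on two invented blobs of points.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two 2-D blobs: class 0 around (-2, -2), class 1 around (2, 2).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)      # gradient of the log loss w.r.t. w
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y)
```

The line `w -= lr * grad_w` is the "learning rate" slider from the playground, made explicit.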
OCDevel's Machine Learning Guide Podcast ↓
Need to Know: This podcast focuses on the high-level fundamentals of machine learning, including basic intuition, algorithms, math, languages, and frameworks. It also includes references to learn more on each episode's topic.
Loved:
Great for trips (when traveling a ton, it was an easy listen).
The podcast makes machine learning fun with interesting and compelling analogies.
Tyler is a big fan of Andrew Ng's Coursera course and reviews its concepts very well, so the two pair together nicely.
Covers the canonical resources for learning more on a particular topic.
Disliked:
Certain episodes were more theory-based; all are interesting, yet impractical.
Due to limited funding, the project is a bit slow to update and has fewer than 30 episodes.
Podcaster:
Tyler Renelle:
Machine learning engineer focused on time series and reinforcement learning
Background in full-stack JavaScript, 10 years web and mobile
Creator of HabitRPG, an app that treats habits as an RPG game
Links:
Machine Learning Guide podcast
Machine Learning Guide podcast (iTunes)
Tips on Listening:
Listen along your journey to help solidify understanding of topics.
Skip episodes 1, 3, 16, 21, and 26 (unless their topics interest and inspire you!).
Kaggle Machine Learning Track (Lesson 1) ↓
Need to Know: A simple code lab that covers the very basics of machine learning with scikit-learn and Pandas through the application of the examples onto another set of data.
Loved:
A more active form of learning.
An engaging code lab that encourages participants to apply knowledge.
This track offers a built-in Python notebook on Kaggle with all input files included. This removed any and all setup/installation issues.
Side note: It's a bit different than Jupyter notebook (e.g., have to click into a cell to add another cell).
Each lesson is short, which made the entire lesson go by very fast.
Disliked:
The writing in the first lesson didn't initially make it clear that one would need to apply the lesson's knowledge to their own workbook.
It wasn't a big deal, but when I started referencing files from the lesson, I had to dive into my workbook's files to find they didn't exist, only to realize that the knowledge was supposed to be applied, not transcribed.
Lecturer:
Dan Becker:
Data Scientist at Kaggle
Undergrad in Computer Science, PhD in Econometrics
Supervised data science consultant for six Fortune 100 companies
Contributed to the Keras and Tensorflow libraries
Finished 2nd (out of 1353 teams) in $3 million Heritage Health Prize data mining competition
Speaks at deep learning workshops at events and conferences
Links:
https://www.kaggle.com/learn/machine-learning
Tips on Doing:
Read the exercises and apply to your dataset as you go.
Try lesson 2, which covers more complex/abstract topics (note: this second lesson took a bit longer to work through).
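The fit/predict workflow the track teaches can be sketched in a few lines of scikit-learn. The tiny housing-style dataset below is made up for illustration and is not Kaggle's actual data:

```python
# A minimal sketch of the scikit-learn fit/predict workflow from lesson 1,
# on a made-up housing-style dataset (hypothetical, not Kaggle's data).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

data = pd.DataFrame({
    "rooms": [2, 3, 4, 3, 5, 4, 2, 6],
    "area":  [70, 95, 120, 100, 150, 130, 60, 200],
    "price": [150, 210, 280, 230, 360, 300, 140, 480],
})

X, y = data[["rooms", "area"]], data["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeRegressor(random_state=0)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print(predictions)
```

Swapping in your own DataFrame and feature columns is essentially the whole exercise of applying the lesson to another dataset.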
Ready to commit
Fast.ai (part 1 of 2) ↓
Need to Know: Hands-down the most engaging and active form of learning ML. The source I would most recommend for anyone (although the training plan does help to build up to this course). This course is about learning through coding, and it's the only course in which I truly started to see the practical mechanics come together. It involves applying the most practical solutions to the most common problems (while also building an intuition for those solutions).
Loved:
Course Philosophy:
Active learning approach
"Go out into the world and understand underlying mechanics (of machine learning by doing)."
Counter-culture to the exclusivity of the machine learning field, focusing on inclusion.
"Let's do shit that matters to people as quickly as possible."
Highly pragmatic approach with tools that are currently being used (Jupyter Notebooks, scikit-learn, Keras, AWS, etc.).
Show an end-to-end process that you get to complete and play with in a development environment.
Math is involved, but is not prohibitive. Excel files helped to consolidate information/interact with information in a different way, and Jeremy spends a lot of time recapping confusing concepts.
Amazing set of learning resources that allow for all different styles of learning, including:
Video Lessons
Notes
Jupyter Notebooks
Assignments
Highly active forums
Resources on Stackoverflow
Readings/resources
Jeremy often references popular academic texts
Jeremy's TEDx talk in Brussels
Jeremy really pushes one to do extra and put in the effort by teaching interesting problems and engaging one in solving them.
It's a huge time commitment; however, it's worth it.
All of the course's profits are donated.
Disliked:
Overview covers their approach to learning (obviously I'm a fan!). If you're already drinking the Kool-aid, skip past.
I struggled through the AWS setup (13-minute video) for about five hours (however, it felt so good when it was up and running!).
Because of its practicality and concentration on solutions used today to solve popular problem types (image recognition, text generation, etc.), it lacks breadth of machine learning topics.
Lecturers:
Jeremy Howard:
Distinguished Research Scientist at the University of San Francisco
Faculty member at Singularity University
Young Global Leader with the World Economic Forum
Founder of Enlitic (the first company to apply deep learning to medicine)
Former President and Chief Scientist of the data science platform Kaggle
Rachel Thomas:
PhD in Math from Duke
One of Forbes' "20 Incredible Women Advancing AI Research"
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend about ~70 hours for this course (it's worth it).
Don't forget to shut off your AWS instance.
Balance out machine learning knowledge with a course with more breadth.
Consider giving part two of the Fast.ai program a shot!
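For a taste of the mechanics Fast.ai builds intuition for, here's a dense layer followed by ReLU written out by hand in NumPy; the input and weights below are arbitrary illustrative values, not anything from the course:

```python
import numpy as np

def relu(z):
    # ReLU zeroes out negative activations
    return np.maximum(0, z)

x = np.array([1.0, -2.0, 0.5])        # one 3-feature input (arbitrary)
W = np.array([[0.2, -0.5],
              [0.4,  0.1],
              [-0.3, 0.8]])           # 3 inputs -> 2 units (arbitrary)
b = np.array([1.0, 0.0])

# One dense-layer forward pass: activation = relu(x @ W + b)
activation = relu(x @ W + b)
print(activation)                     # first unit fires; second is clipped to 0
```

Deep networks are stacks of exactly this operation, which is why the course can spend so much time on it.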
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
The book opens with an amazing introduction to machine learning that briskly provides an overarching view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem.
Immediately afterwards, Aurélien pushes the reader to apply the solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure one has grasped the content within the chapter and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introduction was very useful and put everything into context; this general-to-specific approach worked well for learning.
Disliked:
Installation was a common source of issues during the beginning of my journey, and the text glides over it. The frustration most people experience during installation should have been addressed with more resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introductions to each chapter thoroughly, read the chapter (pay careful attention to code), review the questions at the end (highlight any in-text answer), make a copy of Aurélien's GitHub and make sure everything works on your setup, re-type the notebooks, go to Kaggle and try on other datasets.
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focus on developing a visual intuition on what each model is trying to accomplish.
This visual approach to learning the mathematics is very useful.
Covers a wide variety and breadth of models and machine learning basics.
In terms of presenting the concept, there was a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively notes documentation and suggests where viewers can learn more/reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, the coding exercises, and the mini-projects.
This was the first course I started; the limited instructions on setting up the environment and many failed attempts caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
There is extra code added to support the viewers; however, it's done so with little acknowledgement as to what it's actually doing. This made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Podcaster with Ben Jaffe (currently Facebook UI Engineer and a music aficionado) on a machine learning podcast Linear Digressions (100+ episodes)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: The Andrew Ng Coursera course is the most-referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus on building the mathematical intuition behind machine learning models. You can submit assignments and earn a grade for free; to earn a certificate, you can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Takes a very classic machine learning education approach with a strong focus on math.
Quizzes interspersed in courses and after each lesson support understanding and overall learning.
The sessions for the course are flexible, and the option to switch into a different section is always available.
Disliked:
The mathematical notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, it concentrates on MATLAB and Octave rather than more modern languages and resources.
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched a new course (August 2017) called DeepLearning.ai, a ~15 week course containing five mini-courses ($49 USD per month to continue learning after trial period of 7 days ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined about setting aside time (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
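The course's workhorse algorithm, batch gradient descent for linear regression, translates directly from its Octave assignments into NumPy; the toy data below is illustrative:

```python
import numpy as np

# Batch gradient descent for y ≈ theta0 + theta1 * x,
# using the course's update rule: theta -= alpha * (1/m) * X^T (X theta - y)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])    # generated from y = 1 + 2x

m = len(x)
X = np.column_stack([np.ones(m), x])  # prepend an intercept column
theta = np.zeros(2)
alpha = 0.1                           # learning rate

for _ in range(5000):
    gradient = X.T @ (X @ theta - y) / m
    theta -= alpha * gradient

print(theta)                          # converges toward [1, 2]
```

Re-deriving this update from the cost function is a good check that the course's math stuck.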
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including: MonkeyLearn and Orange)
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual interface that was more user-friendly and offers potential to show some pretty compelling stories.
Example: World Happiness Dataset by:
X-axis: Happiness Score
Y-axis: Economy
Color: Health
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learning and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
Review Google PhD ↓
Need to Know: A two-hour presentation from Google's 2017 I/O conference that walks through reaching 99% accuracy on the MNIST dataset (a famous dataset of handwritten digits that the machine must learn to identify).
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g. ReLu, CNNs, RNNs, etc.)
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference session, not a comprehensive view of machine learning.
Also, a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; the two hours of screen time don't include all of the Googling and processing time.
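As a sanity check on one building block from the talk: softmax, which turns a layer's raw scores into class probabilities, is only a few lines of NumPy (the scores below are made up):

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability, then normalize
    z = logits - np.max(logits)
    exp = np.exp(z)
    return exp / exp.sum()

scores = np.array([2.0, 1.0, 0.1])    # made-up raw scores for 3 classes
probs = softmax(scores)
print(probs)                          # probabilities summing to 1
```

The class with the highest raw score gets the highest probability, which is all the final layer of an MNIST classifier needs to pick a digit.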
Caltech Machine Learning iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematical intuition behind many machine learning models. Dr. Abu-Mostafa is a raconteur who includes useful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students: "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to just give students the final, they would just memorize the answers (i.e., they would overfit to the data) and not genuinely learn the material. The final is a test to show how much students learn.
The last 1/2 hour of the class is always a Q&A, where students can ask questions. Their questions were useful to understanding the topic more in-depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The course on neural networks was a close second!
Disliked:
Although the content is mostly evergreen, the course was released in 2012 and could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it conceptually pulls the whole course together. The course map, below, was particularly useful for organizing the information taught in the course.
Image source: http://work.caltech.edu/slides/slides18.pdf
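Dr. Abu-Mostafa's practice-exam analogy is easy to reproduce in code: a high-capacity model can memorize the "practice exam" (training set) yet bomb the "final" (test set). The toy data below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=10)  # noisy samples

# Hold out the last 3 points as the "final exam" (test set)
x_train, y_train = x[:7], y[:7]
x_test, y_test = x[7:], y[7:]

# A degree-6 polynomial through 7 points memorizes the "practice exam"
coeffs = np.polyfit(x_train, y_train, deg=6)
train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(train_err, test_err)  # near-zero training error, much larger test error
```

The near-zero training error is exactly the "memorized answers" the course warns about; only the held-out error says anything about genuine learning.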
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook; I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. The math and prerequisites were too much to tackle (even with a multitude of Google sessions).
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the history aside sections, where Bishop talked about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematical text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Added fun anecdotes that makes it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's ~300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning.
Contain subtle jokes that add a bit of fun.
Tip to use the Python package manager Anaconda with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop; I had to reread the first few chapters before figuring out it was a support library for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
And he makes some incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is a part of the Online Masters Degree. Although taking this course here will not earn credit towards the OMS degree, it's still a non-watered-down college teaching philosophy approach.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including: VC Dimension versus Bayesian, Occam's razor, etc.)
Discusses Markov decision processes, which didn't really come up in many other introductory machine learning courses, but are referenced within Google patents.
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
The tablet version didn't function flawlessly: some videos were missing content (which I had to mark down and review on a desktop), the app would crash randomly, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for Cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focus on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focusing on those lessons.
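As a taste of the Markov material mentioned above: a plain Markov chain, the building block beneath Markov decision processes, fits in a few lines of NumPy (the weather states and transition probabilities are made up):

```python
import numpy as np

# States: 0 = "sunny", 1 = "rainy"; P[i, j] = P(next = j | current = i)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

state_dist = np.array([1.0, 0.0])     # start certainly sunny
for _ in range(50):
    state_dist = state_dist @ P       # one step of the chain

print(state_dist)                     # settles at the stationary distribution
```

An MDP adds actions and rewards on top of this transition structure, which is where the course's decision-making material picks up.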
Andrew Ng's Stanford's Machine Learning iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), video/audio are a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few courses, it became dreadfully boring. I made it to course six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
The video and audio quality made it a pain to watch.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA classes and supplementary learning materials, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
A Machine Learning Guide for Average Humans
Posted by alexis-sanders
Machine learning (ML) has grown consistently in worldwide prevalence. Its implications have stretched from small, seemingly inconsequential victories to groundbreaking discoveries. The SEO community is no exception. An understanding and intuition of machine learning can support our understanding of the challenges and solutions Google's engineers are facing, while also opening our minds to ML's broader implications.
The advantages of gaining a general understanding of machine learning include:
Gaining empathy for engineers, who are ultimately trying to establish the best results for users
Understanding what problems machines are solving, their current capabilities, and scientists' goals
Understanding the competitive ecosystem and how companies are using machine learning to drive results
Preparing oneself for what many industry leaders call a major shift in our society (Andrew Ng refers to AI as a "new electricity")
Understanding basic concepts that often appear within research (it's helped me with understanding certain concepts that appear within Google Brain's Research)
Growing as an individual and expanding your horizons (you might really enjoy machine learning!)
When code works and data is produced, it's a very fulfilling, empowering feeling (even if it's a very humble result)
I spent a year taking online courses, reading books, and learning about learning (...as a machine). This post is the fruit borne of that labor -- it covers 17 machine learning resources (including online courses, books, guides, conference presentations, etc.) comprising the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). I've also added a summary of "If I were to start over again, how I would approach it."
This article isn't about credit or degrees. It's about regular Joes and Joannas with an interest in machine learning, and who want to spend their learning time efficiently. Most of these resources will consume over 50 hours of commitment. Ain't nobody got time for a painful waste of a work week (especially when this is probably completed during your personal time). The goal here is for you to find the resource that best suits your learning style. I genuinely hope you find this research useful, and I encourage comments on which materials prove most helpful (especially ones not included)! #HumanLearningMachineLearning
Executive summary:
Here's everything you need to know in a chart:
| Machine Learning Resource | Time (hours) | Cost ($) | Year |
| --- | --- | --- | --- |
| Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to | 2 | $0 | '17 |
| {ML} Recipes with Josh Gordon Playlist | 2 | $0 | '16 |
| Machine Learning Crash Course | 15 | $0 | '18 |
| OCDevel Machine Learning Guide Podcast | 30 | $0 | '17- |
| Kaggle's Machine Learning Track (part 1) | 6 | $0 | '17 |
| Fast.ai (part 1) | 70 | $70* | '16 |
| Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems | 20 | $25 | '17 |
| Udacity's Intro to Machine Learning (Kate/Sebastian) | 60 | $0 | '15 |
| Andrew Ng's Coursera Machine Learning | 55 | $0 | '11 |
| iPullRank Machine Learning Guide | 3 | $0 | '17 |
| Review Google PhD | 2 | $0 | '17 |
| Caltech Machine Learning on iTunes | 27 | $0 | '12 |
| Pattern Recognition & Machine Learning by Christopher Bishop | 150 | $75 | '06 |
| Machine Learning: Hands-on for Developers and Technical Professionals | 15 | $50 | '15 |
| Introduction to Machine Learning with Python: A Guide for Data Scientists | 15 | $25 | '16 |
| Udacity's Machine Learning by Georgia Tech | 96 | $0 | '15 |
| Machine Learning Stanford iTunes by Andrew Ng | 25 | $0 | '08 |

(The original chart also rated each resource on Credibility, Code, Math, and Enjoyability; those ratings were graphics that don't survive here, aside from N/A entries for Bishop's textbook and the Stanford iTunes course.)

*Free, but there is the cost of running an AWS EC2 instance (~$70 when I finished, but I did tinker a ton and made a Rick and Morty script generator, which I ran many epochs [rounds] of...)
Here's my suggested program:
1. Starting out (estimated 60 hours)
Start with shorter content targeting beginners. This will allow you to get the gist of what's going on with minimal time commitment.
Commit three hours to Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to.
Commit two hours to watch Google's {ML} Recipes with Josh Gordon YouTube Playlist.
Sign up for Sam DeBrule's Machine Learnings newsletter.
Work through Google's Machine Learning Crash Course.
Start listening to OCDevel's Machine Learning Guide Podcast (skip episodes 1, 3, 16, 21, and 26) in your car, working out, and/or when using hands and eyes for other activities.
Commit two days to working through Kaggle's Machine Learning Track part 1.
2. Ready to commit (estimated 80 hours)
By this point, you should have a clear sense of your interest level. Continue with content focused on applying relevant knowledge as fast as possible.
Commit to Fast.ai 10 hours per week, for 7 weeks. If you have a friend/mentor that can help you work through AWS setup, definitely lean on any support in installation (it's 100% the worst part of ML).
Acquire Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, and read the first two chapters immediately. Then use this as supplemental to the Fast.ai course.
3. Broadening your horizons (estimated 115 hours)
If you've made it through the last section and are still hungry for more knowledge, move on to broadening your horizons. Read content focused on teaching the breadth of machine learning -- building an intuition for what the algorithms are trying to accomplish (whether visually or mathematically).
Start watching videos and participating in Udacity's Intro to Machine Learning (by Sebastian Thrun and Katie Malone).
Work through Andrew Ng's Coursera Machine Learning course.
Your next steps
By this point, you will already have AWS running instances, a mathematical foundation, and an overarching view of machine learning. This is your jumping-off point to determine what you want to do.
You should be able to determine your next step based on your interest, whether it's entering Kaggle competitions; doing Fast.ai part two; diving deep into the mathematics with Pattern Recognition & Machine Learning by Christopher Bishop; taking Andrew Ng's newer Deeplearning.ai course on Coursera; learning more about specific tech stacks (TensorFlow, Scikit-Learn, Keras, Pandas, Numpy, etc.); or applying machine learning to your own problems.
Why am I recommending these steps and resources?
I am not qualified to write an article on machine learning. I don't have a PhD. I took one statistics class in college, which marked the first moment I truly understood "fight or flight" reactions. And to top it off, my coding skills are lackluster (at their best, they're chunks of reverse-engineered code from Stack Overflow). Despite my many shortcomings, this piece had to be written by someone like me, an average person.
Statistically speaking, most of us are average (ah, the bell curve/Gaussian distribution always catches up to us). Since I'm not tied to any elitist sentiments, I can be real with you. Below is a high-level summary of my reviews of all the classes I took, along with a plan for how I would approach learning machine learning if I could start over. Click to expand each course for the full version with notes.
In-depth reviews of machine learning courses:
Starting out
Jason Maye's Machine Learning 101 slidedeck: 2 years of head-banging, so you don't have to ↓
Need to Know: A stellar high-level overview of machine learning fundamentals in an engaging and visually stimulating format.
Loved:
Very user-friendly, engaging, and playful slidedeck.
Has the potential to take some of the pain out of the process, through introducing core concepts.
Breaks content into beginner/need-to-know material (green) and intermediate material that is less-useful noise for those just starting out (blue).
Provides resources to dive deeper into machine learning.
Provides some top people to follow in machine learning.
Disliked:
That there is not more! Jason's creativity, visual-based teaching approach, and quirky sense of humor all support the absorption of the material.
Lecturer:
Jason Mayes:
Senior Creative Technologist and Research Engineer at Google
Masters in Computer Science from the University of Bristol
Personal Note: He's also kind on Twitter! :)
Links:
Machine Learning 101 slide deck
Tips on Watching:
Set aside 2-4 hours to work through the deck once.
Since there is a wealth of knowledge, refer back as needed (or as a grounding source).
Identify areas of interest and explore the resources provided.
{ML} Recipes with Josh Gordon ↓
Need to Know: This YouTube-hosted mini-series playlist covers the very fundamentals of machine learning, with opportunities to complete exercises.
Loved:
It is genuinely beginner-focused.
They make no assumption of any prior knowledge.
Glosses over potentially complex topics that may serve as noise.
The playlist runs ~2 hours.
Very high-quality filming, audio, and presentation, almost to the point where it had its own aesthetic.
Covers some examples in scikit-learn and TensorFlow, which felt modern and practical.
Josh Gordon was an engaging speaker.
Disliked:
I could not get Docker (the suggested setup tool) working on Windows. This wasn't a huge deal, since I already had my AWS setup by this point; however, it was a bit of a bummer since it made it impossible to follow certain steps exactly.
Issue: Every time I tried to download (over the course of two weeks), the .exe file would recursively start and keep spinning until either my memory ran out, computer crashed, or I shut my computer down. I sent this to Docker's Twitter account to no avail.
Lecturer:
Josh Gordon:
Developer Advocate for TensorFlow at Google
Leads Machine Learning advocacy at Google
Member of the Udacity AI & Data Industry Advisory Board
Masters in Computer Science from Columbia University
Links:
Hello World - Machine Learning Recipes #1 (YouTube)
GitHub: Machine Learning Recipes with Josh Gordon
Tips on Watching:
The playlist is short (only ~1.5 hours screen time). However, it can be a bit fast-paced at times (especially if you like mimicking the examples), so set aside 3-4 hours to play around with examples and allow time for installation, pausing, and following along.
Take time to explore code labs.
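To get a feel for what the series covers, here's a minimal scikit-learn sketch in the spirit of the first "Hello World" recipe. Note that the fruit measurements and labels below are toy stand-ins of my own, not the exact values from the video:

```python
from sklearn import tree

# Toy fruit data: each row is [weight (grams), texture (1 = smooth, 0 = bumpy)]
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]  # 0 = apple, 1 = orange

# Train a decision tree classifier on the labeled examples
clf = tree.DecisionTreeClassifier()
clf = clf.fit(features, labels)

# Predict the class of a new, unseen fruit: heavy and bumpy
print(clf.predict([[160, 0]]))  # → [1] (orange)
```

If this snippet runs on your machine, you're in good shape to follow along with the rest of the playlist.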
Google's Machine Learning Crash Course with TensorFlow APIs ↓
Need to Know: A Google researcher-made crash course on machine learning that is interactive and offers its own built-in coding system!
Loved:
Different formats of learning: high-quality video (with the ability to adjust speed and closed captioning), readings, quizzes (with explanations), visuals (including whiteboarding), interactive components/playgrounds, and code lab exercises that run directly in your browser (no setup required!)
Non-intimidating
One of my favorite quotes: "You don't need to understand the math to be able to take a look at the graphical interpretation."
Broken down into digestible sections
Introduces key terms
Disliked:
N/A
Lecturers:
Multiple Google researchers participated in this course, including:
Peter Norvig
Director of Research at Google Inc.
Previously he directed Google's core search algorithms group.
He is co-author of Artificial Intelligence: A Modern Approach
D. Sculley
Senior Staff Software Engineer at Google
KDD award-winning papers
Works on massive-scale ML systems for online advertising
Was part of a research ML paper on optimizing chocolate chip cookies
According to his personal website, he prefers to go by "D."
Cassandra Xia
Programmer, Software Engineer at Google
She has some really cool (and cute) projects based on learning statistics concepts interactively
Maya Gupta
Leads Glassbox Machine Learning R&D team at Google
Associate Professor of Electrical Engineering at the University of Washington (2003-2012)
In 2007, Gupta received the PECASE award from President George Bush for her work in classifying uncertain (e.g. random) signals
Gupta also runs Artifact Puzzles, the second-largest US maker of wooden jigsaw puzzles
Sally Goldman
Research Scientist at Google
Co-author of A Practical Guide to Data Structures and Algorithms Using Java
Numerous journals, classes taught at Washington University, and contributions to the ML community
Links:
Machine Learning Crash Course
Tips on Doing:
Actively work through playground and coding exercises
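The crash course's playgrounds revolve around watching gradient descent reduce loss; a bare-bones sketch of that loop (with made-up data of my own, not the course's own exercises) looks like:

```python
import numpy as np

# Toy data following y = 2x + 1, for the model to recover
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * X + 1.0

w, b, lr = 0.0, 0.0, 0.05   # weight, bias, learning rate
for _ in range(2000):
    error = (w * X + b) - y
    # Gradients of mean squared error (up to a constant factor)
    w -= lr * (error * X).mean()
    b -= lr * error.mean()

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

Raising the learning rate or cutting the number of steps is a quick way to reproduce the divergence/underfitting behavior the playgrounds visualize.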
OCDevel's Machine Learning Guide Podcast ↓
Need to Know: This podcast focuses on the high-level fundamentals of machine learning, including basic intuition, algorithms, math, languages, and frameworks. It also includes references to learn more on each episode's topic.
Loved:
Great for trips (when traveling a ton, it was an easy listen).
The podcast makes machine learning fun with interesting and compelling analogies.
Tyler is a big fan of Andrew Ng's Coursera course and reviews its concepts very well, such that the two pair together nicely.
Covers the canonical resources for learning more on a particular topic.
Disliked:
Certain episodes were more theory-based: all are interesting, yet less immediately practical.
Due to limited funding, the project is a bit slow to update and has fewer than 30 episodes.
Podcaster:
Tyler Renelle:
Machine learning engineer focused on time series and reinforcement learning
Background in full-stack JavaScript, 10 years web and mobile
Creator of HabitRPG, an app that treats habits as an RPG game
Links:
Machine Learning Guide podcast
Machine Learning Guide podcast (iTunes)
Tips on Listening:
Listen along your journey to help solidify understanding of topics.
Skip episodes 1, 3, 16, 21, and 26 (unless their topics interest and inspire you!).
Kaggle Machine Learning Track (Lesson 1) ↓
Need to Know: A simple code lab that covers the very basics of machine learning with scikit-learn and Pandas by applying the lesson's examples to another dataset.
Loved:
A more active form of learning.
An engaging code lab that encourages participants to apply knowledge.
This track has a built-in Python notebook on Kaggle with all input files included. This removed any and all setup/installation issues.
Side note: It's a bit different than Jupyter notebook (e.g., have to click into a cell to add another cell).
Each lesson is short, which made the entire lesson go by very fast.
Disliked:
The writing in the first lesson didn't initially make it clear that one would need to apply the knowledge in the lesson to their workbook.
It wasn't a big deal, but when I started referencing files from the lesson, I dove into my workbook's files to find they didn't exist, only to realize that the knowledge was supposed to be applied, not transcribed.
Lecturer:
Dan Becker:
Data Scientist at Kaggle
Undergrad in Computer Science, PhD in Econometrics
Supervised data science consultant for six Fortune 100 companies
Contributed to the Keras and Tensorflow libraries
Finished 2nd (out of 1353 teams) in $3 million Heritage Health Prize data mining competition
Speaks at deep learning workshops at events and conferences
Links:
https://www.kaggle.com/learn/machine-learning
Tips on Doing:
Read the exercises and apply to your dataset as you go.
Try lesson 2, which covers more complex/abstract topics (note: this second lesson took a bit longer to work through).
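The core flow of lesson 1 — load a DataFrame, choose a prediction target and features, fit, predict — can be sketched as below; the tiny DataFrame is a hypothetical stand-in for the housing CSV Kaggle provides in its notebook:

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Hypothetical stand-in for Kaggle's housing data (column names are illustrative)
data = pd.DataFrame({
    "LotArea":   [8450, 9600, 11250, 9550],
    "YearBuilt": [2003, 1976, 2001, 1915],
    "SalePrice": [208500, 181500, 223500, 140000],
})

y = data.SalePrice                   # prediction target
X = data[["LotArea", "YearBuilt"]]   # chosen features

model = DecisionTreeRegressor(random_state=1)
model.fit(X, y)

# An unrestricted tree memorizes these four distinct rows exactly
print(model.predict(X.head(2)))  # → [208500. 181500.]
```

The lesson follows the same pattern, just with the full dataset and a held-out validation split.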
Ready to commit
Fast.ai (part 1 of 2) ↓
Need to Know: Hands-down the most engaging and active form of learning ML, and the source I would most recommend to anyone (although the training plan does help to build up to this course). This course is about learning through coding; it's the only course where I started to truly see the practical mechanics come together. It involves applying the most practical solutions to the most common problems (while also building an intuition for those solutions).
Loved:
Course Philosophy:
Active learning approach
"Go out into the world and understand underlying mechanics (of machine learning by doing)."
Counter-culture to the exclusivity of the machine learning field, focusing on inclusion.
"Let's do shit that matters to people as quickly as possible."
Highly pragmatic approach with tools that are currently being used (Jupyter Notebooks, scikit-learn, Keras, AWS, etc.).
Show an end-to-end process that you get to complete and play with in a development environment.
Math is involved, but is not prohibitive. Excel files helped to consolidate information/interact with information in a different way, and Jeremy spends a lot of time recapping confusing concepts.
Amazing set of learning resources that allow for all different styles of learning, including:
Video Lessons
Notes
Jupyter Notebooks
Assignments
Highly active forums
Resources on Stackoverflow
Readings/resources
Jeremy often references popular academic texts
Jeremy's TEDx talk in Brussels
Jeremy really pushes one to do extra and put in the effort by teaching interesting problems and engaging one in solving them.
It's a huge time commitment; however, it's worth it.
All of the course's profits are donated.
Disliked:
The overview lesson covers their approach to learning (obviously I'm a fan!); if you're already drinking the Kool-Aid, skip past it.
I struggled through the AWS setup (13-minute video) for about five hours (however, it felt so good when it was up and running!).
Because of its practicality and concentration on solutions used today to solve popular problem types (image recognition, text generation, etc.), it lacks breadth of machine learning topics.
Lecturers:
Jeremy Howard:
Distinguished Research Scientist at the University of San Francisco
Faculty member at Singularity University
Young Global Leader with the World Economic Forum
Founder of Enlitic (the first company to apply deep learning to medicine)
Former President and Chief Scientist of the data science platform Kaggle
Rachel Thomas:
PhD in Math from Duke
One of Forbes' "20 Incredible Women Advancing AI Research"
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend about ~70 hours for this course (it's worth it).
Don't forget to shut off your AWS instance.
Balance out machine learning knowledge with a course with more breadth.
Consider giving part two of the Fast.ai program a shot!
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
Book contains an amazing introduction to machine learning that briskly provides an overarching quick view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem.
Immediately afterwards, Aurélien pushes a user to attempt to apply this solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure one has grasped the content within the chapter and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introductions were very useful and put everything into context. This general-to-specifics learning was very useful.
Disliked:
Installation was a common source of issues during the beginning of my journey; the text glided over this. I felt the frustration that most people experience from installation should have been addressed with more resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introductions to each chapter thoroughly, read the chapter (pay careful attention to code), review the questions at the end (highlight any in-text answer), make a copy of Aurélien's GitHub and make sure everything works on your setup, re-type the notebooks, go to Kaggle and try on other datasets.
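As a taste of the chapter-2 workflow (split the data, fit a model, evaluate on held-out data), here's a minimal sketch using synthetic data in place of the book's housing dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the book's housing data: one feature, noisy linear target
rng = np.random.RandomState(42)
X = rng.rand(200, 1) * 10
y = 3.5 * X.ravel() + rng.randn(200)

# Hold out 20% of the data for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))  # R² on held-out data, close to 1.0 here
```

Once this pattern clicks, swapping in real data and stronger models (as the book does) is a much smaller leap.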
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focus on developing a visual intuition on what each model is trying to accomplish.
This visual learning mathematics approach is very useful.
Cover a vast variety and breadth of models and machine learning basics.
In terms of presenting the concept, there was a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively notes documentation and suggests where viewers can learn more/reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, coding exercises, and mini-projects.
This was the first course I started; the limited instructions on setting up the environment, combined with many failed attempts, caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
There is extra code added to support the viewers; however, it's done so with little acknowledgement as to what it's actually doing. This made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Podcaster with Ben Jaffe (currently Facebook UI Engineer and a music aficionado) on a machine learning podcast Linear Digressions (100+ episodes)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: The Andrew Ng Coursera course is the most referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus on building the mathematical intuition behind machine learning models. Also, one can submit assignments and earn a grade for free. To earn a certificate, you can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Contains a very classic machine learning education approach with a high level of math focus.
Quizzes interspersed in courses and after each lesson support understanding and overall learning.
The sessions for the course are flexible; the option to switch into a different section is always available.
Disliked:
The mathematical notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, the main concentration was MATLAB and Octave versus more modern languages and resources.
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched a new course (August 2017) called DeepLearning.ai, a ~15-week course containing five mini-courses ($49 USD per month to continue learning after the 7-day trial period ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined with setting aside timing (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including: MonkeyLearn and Orange)
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual interface that was more user-friendly and offers potential to show some pretty compelling stories.
Example: the World Happiness Dataset, plotted with:
X-axis: Happiness Score
Y-axis: Economy
Color: Health
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learning and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
Review Google PhD ↓
Need to Know: A two-hour presentation from Google's 2017 IO conference that walks through getting 99% accuracy on the MNIST dataset (a famous collection of handwritten digits, which the machine must learn to identify).
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g. ReLu, CNNs, RNNs, etc.)
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference session, not a comprehensive view of machine learning.
Also, it's a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; 2 hours of screen time doesn't count all of the Googling and processing time for this one.
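The talk starts from softmax regression on flattened pixels before building up to deeper networks; the core operation can be sketched in plain NumPy (the shapes and values here are made up for illustration):

```python
import numpy as np

def softmax(logits):
    # Subtract the row max for numerical stability before exponentiating
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# A fake batch of two "images" flattened to 4 pixels, classified into 3 classes
X = np.array([[0.1, 0.9, 0.2, 0.4],
              [0.8, 0.1, 0.5, 0.3]])
W = np.zeros((4, 3))  # weights (untrained here)
b = np.zeros(3)       # biases

probs = softmax(X @ W + b)
print(probs.sum(axis=1))  # each row sums to 1 — a proper probability distribution
```

Training then amounts to nudging W and b to push probability mass toward the correct digit, which is what the rest of the talk builds up to.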
Caltech Machine Learning iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematical intuition behind many machine learning models. Dr. Abu-Mostafa is a raconteur who includes useful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students: "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to just give students the final, they would just memorize the answers (i.e., they would overfit to the data) and not genuinely learn the material. The final is a test to show how much students learn.
The last 1/2 hour of the class is always a Q&A, where students can ask questions. Their questions were useful to understanding the topic more in-depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The course on neural networks was a close second!
Disliked:
Although it contains mostly evergreen content, being released in 2012, it could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it pulls together the course overall conceptually. The map of the course, below, was particularly useful to organizing the information taught in the courses.
Image source: http://work.caltech.edu/slides/slides18.pdf
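Dr. Abu-Mostafa's practice-exam analogy is easy to demonstrate in code: fit a flexible model on pure noise and it "memorizes the answers" on the training set while failing on held-out data (the data below is random by construction, so there is nothing genuine to learn):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.rand(300, 5)           # random features
y = rng.randint(0, 2, 300)     # random labels: no real signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

print(clf.score(X_tr, y_tr))   # 1.0 — perfectly "memorized" the training set
print(clf.score(X_te, y_te))   # around 0.5 — no better than guessing
```

This is exactly why the course insists on a held-out test set: the final exam, not the practice exams, shows what was actually learned.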
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook. I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. There was too much math and there were too many prerequisites to tackle (even with a multitude of Google sessions).
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the history aside sections, where Bishop talked about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematical text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Added fun anecdotes that makes it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's about ~300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning.
Contain subtle jokes that add a bit of fun.
The tip to use the Python package manager Anaconda with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop. I had to reread the first few chapters before I figured out it was support for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
And he makes some pretty incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
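The book's opening example builds a k-nearest-neighbors classifier on the iris dataset; a condensed sketch of it, without the mglearn plotting helpers, looks like:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the classic iris dataset bundled with scikit-learn
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

# A single-neighbor classifier, as in the book's first chapter
knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(knn.score(X_test, y_test))  # accuracy on the held-out set (~0.97)
```

Everything the book adds on top of this (visualization, model comparison) is supplemental; the fit/score loop above is the skeleton.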
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is a part of the Online Masters Degree. Although taking this course here will not earn credit towards the OMS degree, it's still a non-watered-down college teaching philosophy approach.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including: VC Dimension versus Bayesian, Occam's razor, etc.)
Discusses Markov Decision Chains, which didn't really come up in many other introductory machine learning courses, but which are referenced within Google patents.
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
The tablet version didn't function flawlessly: some videos were missing content (which I had to mark down and review on a desktop), the app would crash randomly, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for Cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focus on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focus on those lessons.
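Since Markov Decision Processes were the one topic in this course I hadn't met elsewhere, a toy sketch may help make the idea concrete: value iteration on a hypothetical two-state problem. The states, transition probabilities, rewards, and discount below are invented for illustration — they are not taken from the course.

```python
# Value iteration on a tiny, invented two-state MDP (not from the course).
# P[s][a] is a list of (probability, next_state, reward) transitions.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 1.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9              # discount factor for future reward
V = {s: 0.0 for s in P}  # value estimate per state, starting at zero

# Repeatedly apply the Bellman optimality update; with gamma < 1 the
# estimates converge to the best achievable expected discounted return.
for _ in range(100):
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in P[s]
        )
        for s in P
    }

print({s: round(v, 2) for s, v in V.items()})
```

Each pass sweeps every state, taking the best action's expected reward plus discounted value of where it leads — the same update the lectures build up to, just on a problem small enough to watch converge.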
Andrew Ng's Stanford's Machine Learning iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), the video and audio quality are a bit poor, and most links online now point towards the Coursera course. Although the idea of watching a Stanford course was energizing for the first few lectures, it became dreadfully boring. I made it to lecture six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
The video and audio quality made the lectures a pain to watch.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA sessions and supplementary learning materials, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
from Blogger https://ift.tt/2IKAbmd via SW Unlimited
belleforeman-blog · 6 years
Text
Truth and objectivity in animated documentary
John Grierson described documentary as “the creative treatment of actuality” (Dirk Eitzen p82, 1995). Although this is a widely accepted definition, it is difficult to establish what constitutes “actuality”. As all representations of reality are reflections of a personal viewpoint or belief, and documentaries are often designed to portray a specific message, it could be argued that these depictions are in part fictional. To me, what constitutes a documentary is a representation of real events, people and locations, which tells the story of what actually happens, or has happened, in reality. It is valid for a documentary to offer a perspective, interpretation or an argument about its subject, as this is a natural consequence of the nature of documentary. However, it is important that the filmmaker represents reality and does not reinterpret it. In this essay, I will explore whether animation follows or defies the fundamental values of truth and objectivity in documentary.
Paul Wells underlines the key issue of animated documentary, which is that the very nature of animation cannot be objective. ‘The very subjectivity involved in producing animation, [...] means that any aspiration towards suggesting reality in animation becomes difficult to execute. For example, the intention to create a ‘documentary’ in animation is inhibited by the fact that the medium cannot be objective’ (Paul Wells p27, 1998). He goes on to say that ‘the medium does enable the film-maker to more persuasively show subjective reality.’ This is a fundamental point. Indeed, suggesting reality through animation may be difficult, but it can be hugely beneficial when used to portray subjective matters. 
Animated documentaries consist of drawing and imagery that a camera literally cannot make, but this does not necessarily render them untruthful. Their content can still be about real people, real locations and real events. Slaves is an animated documentary about two Sudanese children, Abouk aged nine, and Machiek aged fifteen, who were kidnapped and enslaved by government backed militia in southern Sudan. David Aronowitsch and Hanna Heilborn made sensitive visuals that allow the children’s voices to be heard. The film begins with the interview set up, and as the interview progresses, the animation moves out of the room and the children’s terrible experiences are visualised. By establishing the film in the reality of the interview, the experiences of the children are more afflictive. This is highlighted when Abouk and Machiek talk about the abuses they witnessed, for example, of other children being ‘torn apart’ and thrown down a well. The sound recording is accurate and unedited. The typical background noises associated with an interview setup, such as the little sneezes, gives the audience a strong sense of how traumatic and true their experiences are. As a viewer, what you see is a subjective representation, but what you hear is a real child recounting a real experience. Rozenkrantz maintains that the inclusion of the actual recorded voice is critical to guarantee an animated documentary’s success. The success of Slaves is due to this, as it enables the viewer to trust the interviewee’s story. Rozenkrantz also claims that hearing the actual voice of the individual who the memories belong to, fills the gaps and provides credibility (Jonathon Rozenkrantz, 2011). In Slaves, the majority of the interview concerns the children’s memories of their past, of which no footage exists. The animation bridges the gap between the voices of the children and their memories, creating a deeper connection and understanding. 
Slaves is an exceptional piece of animation which highlights what animation can achieve in the field of nonfiction documentary.
Animation also enables consideration and control over movement. Animators can selectively exclude or add information. They can control the length of time the observer views information and the way it unfolds. Sometimes less is more, as a viewer will gain more from a documentary that is easy to digest and understand. This applies to Slaves, as when the children start to speak of their experiences, we are taken out of the room and into another landscape, where the scenes are not crowded or busy, allowing the children’s stories to come through clearly and making a bigger impact. Although this is not necessarily the case in Slaves, condensing information can result in the filmmaker’s truth being imposed onto the audience, as they may make the decision to leave out something significant which subtly changes the story. The animators of Slaves have produced a slight shake of the frames, which mimics a camera wobble. This is a small touch, but it undoubtedly adds to the sense of reality. It could be argued that the content of solely voice-recorded interviews is more truthful than that of video-recorded interviews, as film crews and camera equipment can be intimidating to interviewees, who might not feel comfortable to speak openly of their experiences.
Animation can act as a substitute for live-action documentary, substituting what cannot be captured on film. ‘Animated documentaries offer us an enhanced perspective on reality by presenting to us the world in a breadth and depth that live action alone cannot’ (Jeffrey Skoller, 2011). It can be a very effective tool when depicting the unseen, making the invisible visible. Examples of the unseen are subjective, internal psychological states, or the visualisation of events where no footage or only partial footage exists, such as memories, as demonstrated by Slaves. Although it may be easy to identify ways in which animation can be beneficial to documentary making, it can be challenging to bring this back in relation to truth. All documentaries are constructions, and the viewer discovers the ‘truth’ in a film through the director’s assembly of cinematic choices – choices that inevitably represent the filmmaker’s version of the truth (Peter Biesterfeld, 2016). Indeed, it might be helpful for an audience to see visuals of a subject, but this could potentially detract from the original message, as rather than the audience experiencing the ‘truth’, they are instead viewing the director or creator’s understanding or interpretation of the truth. Arguably, this question can be applied to live-action documentary. As Wolf Koenig stated, “every cut is a lie. Those two shots were never next to each other in time that way. But you're telling a lie in order to tell the truth” (Peter Biesterfeld, 2016). It could be said that animated documentaries are more ‘truthful’ than live-footage documentaries, as they are true to the editing process. No one thinks they are real. Whereas in live-action documentaries, particularly during scenes where the past has been reenacted, people may consider the documentary untrustworthy.
Animation however, asks us to see it for what it is, ‘a construct or representation,’ by introducing a certain transparency and self-consciousness to the mix’ (Beige Adams, 2009).
One of the main strengths of animation is its power to engage the audience. Often, an abstract representation of information is more appealing than a literal one. American filmmaker, Richard Robbins, believes real footage ‘impinges your ability to listen to the story’ (Beige Adams, 2009). As a society, we tend to disengage when presented with genuine footage of pain, suffering and violence.
‘Animation has emerged as an important practice in recent documentary, particularly those that examine life in wartime’ (Tess Takahashi, 2011). Waltz with Bashir is an animated documentary about the Lebanon war and dealing with trauma. The use of animation creates space from the photographic imagery that is necessary in order to observe and take in the information. For topics like trauma, animation seems suitable, as ‘the unconscious must become manifest. The invisible must become visible’ (Carlo Avventi, 2018). The filmmaker, Ari Folman’s experience shapes Waltz with Bashir. It is his attempt to make sense of his time as an Israeli soldier in the Lebanon war in 1982. It is centred around a series of conversations between Folman and other Israeli soldiers, who were in Lebanon on the night that three thousand Palestinian refugees were massacred by Christian militia. Twenty years on, after an old friend tells Folman of a recurring nightmare relating to the war, Folman realises that he has no recollection of his experiences during the war. Ari Folman had previously worked on more conventional, live-action documentaries, but for Waltz with Bashir, animation seemed fitting due to the nature of the content. Folman himself said, ‘with animation you can do anything’ (Carlo Avventi, 2008). The film weaves in and out of dreams, hallucinations and memories, of both his own and other veterans, and it makes it possible to visualise these subjective, internal thoughts. As each man tells his story to Folman, we are taken out of the setting of the conversation and into a frightening, dark landscape. ‘The freedom afforded by animation [..] allows Mr. Folman to blend grimly literal images with surreal flights of fantasy, humour and horror’ (A. O. Scott, 2008).
As in Slaves, the interpretation of the Lebanon war through animation was not a wholly aesthetic decision, ‘with its artificiality, the animation offers the necessary distance to be able to approach the images of such an event’ (Carlo Avventi, 2018). When a subject is too horrific, many people find it too hard to watch live-action, or choose not to watch it as a way of self-protection. Animation can be a useful tool to draw people in without being too gruesome. However, to gain a true understanding of a subject, it may be that the absolute reality needs to be seen. Animation could potentially act as a blanket. Viewers may use its clear subjectivity as a comfort that what they are seeing didn’t really happen. This psychological block is challenging to break. Maybe the most ‘truthful’ documentary is one that uses both animation and live-footage; for animation to slowly introduce an idea to the audience, without being too graphic, then to consolidate its reality, by showing real photographic images or footage. The subject’s reality is then inescapable. This way, the filmmaker is able to show their truth without causing viewers to disengage for reasons of self-protection. This is what Ari Folman did in his animated documentary, Waltz with Bashir. Just before the film finishes, the animation stops, and we are faced with horrifying footage of raw grief and real dead bodies. This indicates ‘just how far Mr. Folman is prepared to go, not in the service of shock for its own sake, but rather in his pursuit of clarity and truth’ (A. O. Scott, 2008). It is also his ‘way of acknowledging that imagination has its limits, and that even the most ambitious and serious work of art will come up short against the brutal facts of life’ (A. O. Scott, 2008). As the audience, when confronted with these graphic images, it is the stark contrast between this and the animation that allows the reality and pain to really hit hard. 
The live-action footage is the most distinguishable difference between Waltz with Bashir and Slaves and is certainly effective, but it does not necessarily make Waltz with Bashir more truthful. The unedited voice recording of the children’s stories in Slaves serves as a powerful truth.
All documentaries set out to teach us something about the world, and animation can take this further by pushing the boundaries of how and what we learn. There are many things in our world that simply cannot be observed literally. Animation can compensate for the restrictions of live-action. Slaves and Waltz with Bashir both demonstrate the benefits of using animation in documentary making. ‘It is clear that animation is not just a technique or technology, but a necessary mode of representation’ (Jeffrey Skoller, 2011). Indeed, the medium itself is not objective, but its ability to bring subjective meaning to life is formidable. In my opinion, if the content is based on real events, real people and real locations, animated documentaries do adhere to the conventional values of truth in reportage, and in some cases, animation can convey information with more success, particularly when describing subjective states. I believe there is just as much discourse with animated documentaries as live-action documentaries when questioning their ‘truth’. Though both are valid, animation is undoubtedly more truthful, as it is transparent about the way it is made, allowing the audience to be more thoughtful and creating a space to make their own judgment.
ohzoya-blog1 · 7 years
Text
Choosing Running Shoes for Triathlon Training

When I first started out in triathlon training, I had no clue how much money, or how specialized the gear, I would need. I obviously knew I needed a bike, running shoes, and some apparel, but had no idea about watches, cycling shoes, aero helmets, aero wheels, wetsuits, or the dozens of other items. In fact, my first running shoes were just a cheap pair I found at a local sporting goods store. I simply had no plan. As I started getting more focused on my racing and my running, I realized that I needed shoes that would let me perform at my best. Again, I was at a loss for where to begin. I searched online for information about running shoes and found plenty about marathons and distance racing, but not much specific advice on triathlon footwear. I took what I could from what I found and began the process of buying shoes. Along the way, I have picked up several tips that I want to share here.

Get Fitted - No, this is not a redundant note about getting in shape; rather, get fitted for the sort of shoe you want. Most running specialty stores will help you figure out which shoe is best for you, and many will take you through a series of assessments to find the right fit. Having done this several times now, the process goes like this. First, the sales rep will talk with you a bit about what you do for running, how long you have been running, and what your running goals are.

I look for someone who has been with the store a while, to get the most useful kind of discussion about my needs as a runner, especially since I focus on triathlon-specific running. Second, either electronically, using a special pad you stand on, or manually, with the old metal sizing plate that is always cold, they will measure the length and width of your foot. Third, they will bring out a sample shoe for you to do a short workout in. I always assume the sales rep has a shoe they personally like, so I am leery of just latching onto the first one offered. The sample shoe will be a neutral shoe, without any lift or support to keep your foot straight. Fourth, the sales rep will have you get on a treadmill and run for five to ten minutes while they record how your foot falls. You will want to wear something you can run in comfortably. At my last fitting like this, the man also watched me from the side to make sure I was landing correctly. Once you finish this short running session, the sales rep will review the video with you. He is looking for supination - the opposite of pronation, meaning the outward roll of the foot during normal motion. If you have too much supination, you will need a shoe that helps to balance your feet. I have a fairly neutral footfall, so this is not an issue for me. At one of the stores I visited, they also had me stand on a glass screen that measured the pressure points of my feet, which helped determine the arch of my foot. After all this information is gathered, the sales rep will pick a shoe for you to try. This fitting process is extremely important.

Get Choosy - Take your time choosing among the shoes the sales rep has pulled for you based on all that information. I have learned that the look and feel of the shoe matter a great deal. If you don't like the style of the shoe, you will feel less than enthused about wearing it, even just for running. If you don't like the feel, you will not run. This choice is important. You can be a little choosy about style, but for me the priority is something that will give me support and comfort for many miles. Most trainers will last you 300-500 miles, so choose wisely which shoe you go with. Often the store will let you run on the treadmill or on a small track inside the store; some will even let you go outside to run in the shoes. Take as much time as you need to be sure these are the shoes you will run in throughout your training and racing.

Get More - Yes, get more than one pair. Usually what I do is pick the right type of shoe at the store. I certainly don't want someone to spend thirty minutes to an hour helping me find a shoe, only for me to go home and buy it online; that is unfair and a terrible practice. I do, however, go home and start thinking about other pairs, because you should have a rotation of shoes. When I first started rotating shoes, I bought three of the same model: two that I swapped back and forth on training days, and a third reserved for racing. That worked quite well, but I have since been told that you should rotate two to three different types of shoes during training.

Different shoes may have a different amount of cushion or a different amount of drop. Drop is the height difference from the heel to the toe; you can find anything from 0 to 10 mm of drop or more. I am now trying to keep three different amounts of drop in my rotation. The idea is that different amounts of drop work different foot, calf, and leg muscles, so rotating shoes builds a stronger, more resilient stride. You will then want to choose a shoe for racing. For a short running race, you may want a shoe without much cushioning, which will be lighter and faster. For a longer race, you will want more cushion for the long miles. Much of the process of picking your racing shoes comes down to trial and error.
ozsaill · 7 years
Text
Cruising the Bahamas: beauty at our back door
Hard won miles to windward from the cerulean blue of our last Bahamian anchorage, some perspective on our months in the islands is sinking in. I went in with a mixed bag of expectations: friends who have sailed around the world claim it’s among the best cruising to be had (don’t we all love our first major destination?). Other cruisers who don’t have that far-reaching basis for comparison rave about it (was a narrower base of comparison at play?). It put me on guard: were we REALLY going to like it that much? How could islands so close to the USA possibly offer that kind of exceptional experience?
Confession: I spent too much of our time there being jaded and just needed to get over it. So what if the Bahamas didn’t measure up in discrete specifics to more exotic locales? On its own merits, the islands are a spectacular cruising ground, and there is a lot to love. These are the reasons it stood out in our experience.
Beautiful water
It’s spectacular. There is almost nothing more to say. We’ve seen a lot of mesmerizing water on our way around the world, and the Bahamas (tie: Bermuda) is at the top of the heap. It’s as though it is lit from within: and it is, in a way, as sunlight reflecting off a white sandy bottom is what lends the vivid blues. Stunning shades of aqua in the winding inner channel of the Exumas are now my benchmark. A gift for cruisers starting out from the US east coast: their first international step can transport them to some of the best! UNDERwater is another story, but we’ll save that for later.
Photos can’t do the colors justice, but offer a suggestion
Close to home
It’s a DAY trip! Sure, there is a meaningful bit of water to cross and the Gulf Stream deserves all the respect and planning you can give it. But at the end of the day, well… at the end of the day in which you depart Florida, you can be relaxing on the hook in Alice Town or West End, and rightfully feel like you have transported yourself a world away to an island paradise where you can beachcomb for intricate shells, paddle in turquoise water, gawk at mountains of conch shells, maybe even swim with dolphins (all features of our point of arrival, Bimini).
How to describe the feeling of being approached by a playful dolphin?
This proximity also helped when Jamie and I had to fly out. I was gone a week for the Annapolis spring boat show; Jamie hopped around Florida and the Caribbean checking out boat listings with a few of our coaching clients. Even in what felt like relatively remote islands, flights were easy to book on relatively short notice and fares weren’t terrible. What a great way to cruise in a place that’s relatively easy to have visitors! And if you’re sailing back to the US, it’s likely to be with the wind at your back…and an easier task to find a date to cross the Gulf Stream in comfort.
Access to stuff
If you came for the sand you’ll be in paradise. If you came for the avocados to make guacamole to accompany nacho chips that cost $11/bag, then carry on to Puerto Rico!
Sure, you may want to provision up anything you must have; you might not find it and it will cost more when you do. But it’s a corollary of “close to home,” these islands aren’t in the middle of an ocean. They’re regularly supplied by mail boats (or planes). Costs can be eyepopping (especially for our hungry crew…wow the kids were easier to feed when they were little!), but that’s if you’re trying to recreate your Publix shopping cart at a market on Eleuthera. Mitigate expense with advance provisioning or switching your diet to local style: market rates or government subsidy keep many staples affordable. Get out the fishing gear. Shift your habits. Eat on board instead of ashore.
Conch at a pier on Eleuthera: 7 for $10
Ultimately, availability wasn’t as bad as I expected from reports. In George Town, it was possible to get everything from kale to mushrooms and shallots. Markets in Staniel Cay had surprising breadth: asparagus anyone? (thanks I’m sure to the higher-end charters frequenting the area and providing a ready market to supply.)
Bounty after the mail boat: George Town, Great Exuma
If you need boat parts, it’s a little different. People don’t need diesel mechanics the way they need food. But help is there, and parts are just a DHL shipment away. Many corners of the world are a lot more complicated, and lot slower / more costly, if it’s necessary to source and deliver boat bits. So you may have to wait a bit…there are few places that wouldn’t be lovely to be required to wait around!
Easily connected
We started out by using our existing US T-Mobile plans. T-Mobile’s customer service crowed about the 4G we’d have living in the Bahamas, leveraging the BTC cellular network that’s already in place. Well, there was broad coverage. That’s incredible, really, considering the dispersed islands and thin population. But the service was throttled back to 2G. Fine if you’re just checking email, but really not good enough for what it cost. No problem: swapping our T-Mobile SIM card for a BTC SIM was affordable and easy. $15 for the SIM, and during our stay, 15 gigabytes cost only $35 – much better value than our paused T-Mobile plan and about the cheapest per-GB rate yet.
Social scene
Despite being entirely off pace with the seasonal flow of the Bahamas, the islands lived up to their reputation as a social hub for cruisers. Our timing meant that we experienced it on a smaller scale (George Town peaks with more than 300 boats; there were maybe a dozen transients when we came in). But we were able to meet up with “internet friends” passing on the way to the states, and make new friends who, like us, had plans to point to the Caribbean for hurricane season.
An overdue meetup with the Tookish crew, plus friends
But your draft!
US east coasters in particular seem to make a big deal about shallow Bahamas water limiting access to all but shallow draft boats. Depths require attention, but it is NOT a big deal. Shallower draft boats can anchor closer to the beach. Once in a while they can take a shortcut that we can’t, or skip waiting for higher tide. Repeat: it is not a big deal. We draw 6’; we spent time with a boat drawing 7’, neither of us felt compromised in our anchoring or locked out from cool spots.
Siobhan peeks under Totem’s keel: at times we only had a few inches at low tide
Uncomplicated cruising
The Bahamas was largely a straightforward place to cruise. Same language, much of the same cultural context, it’s safe, there are oodles of blogs and other resources to help plan a trip. Currency is 1:1 with the US dollar, and US currency is accepted everywhere. It really does not get much easier! But I can appreciate that for cruisers who are reaching beyond the US coast for the first time, it may feel …not easy. And of course, it’s Not America, and with that may creep in some uncertainty. The cure for that is the Waterway Guide. Updated annually, it includes exhaustive detail to relieve any worries a new cruiser (or, newly international cruiser) might have, from the clearance process (an overall view plus details of what to do / where to go at each port of entry) to understanding the unique dynamics of the tide in the Bahamas (they have a great description that helped it make perfect sense to me) – along with all that normal logistical guide stuff of places to go, conch shacks to patronize, and reefs to snorkel. It’s the only book you need.
Late-season flock anchored off Monument Beach, George Town
The same folks who think you need shoal draft boats to cruise the Bahamas warn about bad charts and currents and tides and dragons. Dunno about the dragons, but just like depth, current/tide merely requires attention. It’s not unduly complicated, but may be new for boaters accustomed to channel markers wherever you might need them and aids to navigation for any hazard. Possibly that’s why the Explorer charts have developed an otherwise puzzling cult following. After being at the receiving end of a mountain of FUD, we finally conceded to buy a set. They WERE good charts, but along our winding path from Bimini through the Exumas to Great Inagua, Navionics charts (used with the iNavX app) were pretty much spot on (save a few places where we found more depth than they indicated). And speaking of FUD, that’s what Explorer throws at boaters who just want to anchor. In one anchorage after another Explorer reported bad holding where we set the hook very well, thank you. They also advertise a lot of marinas…
We maxed out the three months we were granted on entry to the Bahamas. What we didn’t max out were the opportunities to explore. Always good to leave something wanting? One aspect is certain: the further away from the US we got, the better we liked the Bahamas. Had our earlier plans not relied on pauses and airports while Jamie and I took care of business, I kinda think we might have tipped over into full-fledged Bahamas cheerleaders. There were just a few things that held us back, though, and that’s the next post.
Drones-eye-view to the north at Stocking Island, Exumas
from Sailing Totem http://ift.tt/2t82JuE via IFTTT
0 notes
greatdrams · 7 years
Text
Whisky Live London 2017: An Overview
“You take this very seriously, don’t you?” commented Dave Worthington as a sample of Boutique-y Whisky’s Glenlossie 25 year old found its way into the spittoon.
“I feel bad,” I apologised; “it’s such lovely whisky, but there are too many to get through – I don’t want to miss out!”
It’s midway through Saturday afternoon, and within the venerable bowels of the Honourable Artillery Company, London’s Whisky Live is in full swing.
Let’s linger on that apology – and confession – first of all. They’re words I find myself repeating relentlessly at whisk(e)y festivals, and I do genuinely worry sometimes that I’m causing offence, but yes – I use the spittoon. For everything. Occasionally there’ll be a stand that doesn’t have a spittoon, and when that happens I find myself yo-yo-ing back and forth to a stand that does. Which feels even worse: “excuse me, I’m not here to try your whisky right now, I just want to spit in your bucket.” (I don’t actually vocalise that, obviously. It’s just the unspoken implication.)
In any case, I didn’t think it was that unusual. Surely, I thought, that’s what everyone does, and that’s what the spittoons are there for. But as I was chatting away to Ibon, Loch Lomond’s brand ambassador, he remarked: “in two days you’re the first person I’ve seen do that.”
Great, now I feel even worse.
So here, officially set down, is my apology to all brand ambassadors and whisk(e)y makers to whom I have caused unintentional offence. Rest assured – I love your product. But I’m afraid I won’t stop using the spittoons. Because as much fun as whisky festivals are, Dave is right – I do take them seriously. I’m there to learn.
After all, how often do you have open access to such a broad spectrum of distilled barley juice? Of course there’s no way to get through it all; there wouldn’t be time, and it’d knacker your palate anyway. So – and feel free to laugh all you want at what a loser I am – my festivals usually start with a lap or two of just figuring out where everything is, and what I haven’t tried previously.
And after five hours of tasting, what did I learn?
Well, I said it after the Whisky Show in October, and I’ve already gushed it on twitter, but I’ll say it again: Boutique-y Whisky has, for me, become the stall to beat at these festivals. The breadth of their range is just stunning; malt, grain, blends, rye; Scotland, Ireland, US, Holland. And it doesn’t hurt that with the likes of Dave Worthington and James Goggin they’ve more or less got the A-team of brand ambassadors too. Unsurprisingly the near-opaque Springbank 21 on their table was emptying as fast as they could pour it, but my favourites were that Glenlossie 25 and an outrageously tropical 24 year old Irish. I’ve put a star next to their two blended whiskies as well, for what that’s worth. (Nothing, I expect.)
In fact the independents generally provided some of my top picks of the day, because Murray McDavid were on sparkling form too. Their table’s showpieces were the 26 year old Bowmore and a 48 year old Tomintoul, but the one that really stuck in my mind was their 10 year old Rìgh Seumas I 2004 blended malt. Possibly because I can actually afford it, but it genuinely offers seriously good whisky for the price. And you know I love a good blended malt...
Have to mention the 28 year old Blair Athol from Milroy’s whilst I’m at it. Not a distillery I always get along terribly well with – this could be the best whisky of theirs I’ve ever tried. Superb balance of cask, distillate and maturity. My pick of the festival? It’d be awfully close.
A highlight for me was finally trying the Port Askaig range next to the Elements of Islay. I’ve occasionally wondered quite why Speciality Drinks needed two separate ranges of peated whisky, and what the real points of difference were beyond Port Askaig stating age and Elements stating distillery. Why not lump them all together, I wondered. But chatting to Billy Abbott and tasting them side by side, my conclusion is that the Port Askaig goes in a more elegant direction, whilst Elements is best at conjuring up Islay’s more rumbustious and brutish side. They’re two very different styles, and there’s definitely space for both. That’s just my opinion of course, and you’re more than welcome to disagree. In any case, I prefer the ‘Peat’ expression from Elements to Port Askaig’s 100 proof, but my favourite of the table was Port Askaig’s 15 year old.
What else? I hope Leonardo DiCaprio picked up a bottle of Deanston’s 18 year old on his visit, because it’s a cracker. Bunnahabhain’s Moine Oloroso is brooding, burly and boisterous in an almost over-the-top but fantastically fun way – smoke and sherry battling it out, rather than one swamping the other. Nikka 12 is absolutely delicious...but perhaps not to the degree that it’s worth paying twice the price of From the Barrel. I still love Paul John and I need a bottle of their Peated Select Cask in my life. (That one actually might have edged the Blair Athol as my pick of what I tasted – it deserves all the awards it’s won.)
I mustn’t forget the Canadians. J.P. Wiser’s Dissertation – what a revelation! I have absolutely no idea whether you can get it in the UK; guttingly I suspect the answer is that you can’t. But if you come across a bottle and are a fan of rye-led American whiskies then don’t hesitate for a moment. Same goes for the deliciously complex Gooderham & Worts Four Grain. Canada: please may we have some of this? Pretty please?
Oh, and courtesy of the El Dorado table (especially the sumptuous 21 year old) I’m worried that I might be getting into rum.
A few other thoughts. It got a bit cramped. It’s a lovely space, but it does become a little cattle-market-esque, especially when a fresh wave of eager imbibers arrives midway through. Possibly this just reflects how lucky the Whisky Show is in its Old Billingsgate venue, but it’s worth bearing in mind as a customer. Basically, arrive early and do the popular stalls first. Or you’ll be waiting. A lot.
I’d like to see a few more tables for whiskies from around the world. F.E.W, Michter’s and Balcones were the only obvious US options, plus Japan’s Nikka, Ireland’s Teeling, South Africa’s Bain’s Cape Mountain, Sweden’s Mackmyra and Box, Taiwan’s Kavalan, and India’s Paul John and Amrut. (There may be others that slipped by me, for which I apologise.) Don’t get me wrong – that’s a very strong selection, and with a room full of delicious whiskies – more than enough to be getting on with – this seems a slightly silly criticism. But Whisky really is global now, and whilst I love Scotch whisky as much as (if not more than) the next man, festivals like this ought to be the best place to explore the full breadth of what’s on offer. As I say, this is only a slight criticism – and the ‘world’ options (not a fan of that term...) are definitely increasing. I’d just like to see that continue really. Probably I’m just fussy.
Cocktails and crackers are a fantastic shout. A gin and tonic was an absolutely perfect palate respite midway through, and crackers/water biscuits are definitely the best sponges.
Which more or less concludes my roundup. As ever with these festivals there were tables I missed that I’d have loved to get to, and tables I did get to that I haven’t mentioned here – so my apologies to those I’ve overlooked. Too many whiskies, too little time! I’m also sorry for the paucity of photos – for some reason I more or less completely forgot!
The bottom line is that I had an absolutely super afternoon, that Whisky Live was a triumph, that the whiskies presented were of a very high average standard indeed, and that if you missed it you should save the date for next year.
And that I’m sorry for using the spittoons. Sort of.
Cheers!
My five favourite whiskies of those I tasted (for what it’s worth; and remember ‘favourite’ doesn’t mean ‘best’):
That Boutique-y Whisky Company Glenlossie 25 years old
Milroy’s of Soho Blair Athol 28 years old
Paul John Peated Select Cask
J.P. Wiser’s Dissertation
Port Askaig 15 years old
    The post Whisky Live London 2017: An Overview appeared first on GreatDrams.
from GreatDrams http://ift.tt/2o8i3bc Greg
0 notes
kountrykravings · 7 years
Text
Floor
Flooring is worth thinking about carefully – after all, the kitchen is usually the busiest room in the home. Before you choose the perfect flooring, also consider what the floor’s principal use will be. If you’re looking for an option that’s easy to maintain, consider exploring our stain-resistant carpet collection. If you want a natural look but need something tougher, check out our long-lasting, scratch-resistant laminate flooring, designed to give you that warm wood look while standing up to the wear and tear of your household.
Although carpet can be more challenging to clean and maintain than other flooring options, it has the advantages of muffling sound and retaining warmth. Our wide selection of wood flooring means you can find a style you love that looks good in your room without breaking the bank. Find your ideal flooring at the UK’s top home flooring store – Carpetright – and use our flooring buying guide to work out which characteristics matter most to you. Catching up with Nicole in Fort Collins, CO, after she installed Kensington Manor handscraped laminate flooring from Lumber Liquidators. Hardwood flooring is generally easy to clean and offers a range of design options.
In addition to finding ratings for the latest products, you can now shop online using an ad-free interface and purchase flooring in a secure buying environment. Although wood floors are a lovely addition to any home, there are many other options to explore. Luxury vinyl tile (LVT) offers affordable, attractive tile and wood styles without compromising quality. Select Surfaces Canyon Maple laminate flooring looks and feels like real hardwood, with a hand-scraped finish and embossed-in-register texture. We offer flooring reviews and unbiased ratings to help you pick the best floor for your needs, and we can supply vinyl flooring for kitchens, laundries and high-traffic hallways.
Make sure there are no moisture problems with the slab before installing any flooring. Based on your answers, a Floor Finder will identify the flooring category that matches your lifestyle and budget. Non-structural flooring refers to any product installed as an upper layer and wearing surface of the floor that is not load-bearing. Sears Home Services offers a variety of flooring materials including carpet, hardwood, laminate and luxury vinyl, each in its own range of colours and patterns. For stained concrete surfaces, get professional tips and design ideas for using stains to enhance concrete floors.
You may then want to contact one of our stores and ask for our expert shop-at-home service, where you can view our comprehensive range in your own home – so the colour and style of our flooring samples can be matched to your furniture, window furnishings and room colours.
Whether you have decided to install hardwearing carpet in your living room, beautiful and durable hardwood as part of a home renovation, or tile to brighten your bathroom, our highly experienced installation team will make sure the job is done to your satisfaction.
The company’s solid flooring is made entirely in the United States, along with a considerable amount of engineered flooring. While stains are the most common, other colouring options for concrete flooring include painting, coloured sealers and dyeing. To work out how much flooring you need, calculate the square footage of the room by multiplying its length by its width. Stay up to date on the latest design trends, new products and special offers on the best flooring around. One of the main benefits of solid wood flooring is that it is long-lasting and resilient; another thing that draws buyers to solid flooring is its ease of maintenance.
Rigid Plus flooring is waterproof, with the look and feel of real wood. Parquet flooring or engineered timber are wonderful ways to bring wood into the home in an alternative way, and laminate remains a firm, economical favourite. Fair Trading will in the foreseeable future contact anyone who currently holds a flooring licence for the purpose of installing structural floors, to discuss migration to a new licence class; alternatively, you can contact Fair Trading on 13 32 20.
Sub-floors must be clean, dry, structurally sound and level, and many manufacturers advise using a specialist experienced in the flooring being installed (especially for natural stone). Many of the people in our stores are your friends or neighbours – most likely we will recognise you or your family. Vinyl flooring has evolved considerably from the ugly, cheap old linoleum into a beautiful, reliable and very resilient flooring option for homes, offices and commercial spaces. Cork flooring is made from a byproduct of the cork oak tree; it is considered eco-friendly because the bark is stripped every eight to ten years and stripping does not harm the tree.
Wood flooring can help a room feel more sophisticated and formal, but it can also suit an informal space, especially when paired with the right area rug. Loose-lay vinyl can be walked on immediately and is available in a broad variety of natural wood and stone textures. I have been a customer for years because they truly put the customer’s needs first, and I often recommend Flooring America. Natural materials such as timber and stone are always a popular choice and lend authenticity to a scheme. Our members run their own stores and have been doing so for decades, making them trusted and experienced in serving our customers. Once cut to size, our solid wood flooring is taken straight from the saw mill to the factory for finishing and drying. We can also supply commercial vinyl flooring for offices and stores.
The post Floor appeared first on Kountry Kravings.
from Kountry Kravings http://www.kountrykravings.com/flooring/floor/
0 notes
lawrenceseitz22 · 6 years
Text
A Machine Learning Guide for Average Humans
Posted by alexis-sanders
Machine learning (ML) has grown consistently in worldwide prevalence. Its implications have stretched from small, seemingly inconsequential victories to groundbreaking discoveries. The SEO community is no exception. An understanding and intuition of machine learning can support our understanding of the challenges and solutions Google's engineers are facing, while also opening our minds to ML's broader implications.
The advantages of gaining a general understanding of machine learning include:
Gaining empathy for engineers, who are ultimately trying to establish the best results for users
Understanding what problems machines are solving for, their current capabilities and scientists' goals
Understanding the competitive ecosystem and how companies are using machine learning to drive results
Preparing oneself for what many industry leaders call a major shift in our society (Andrew Ng refers to AI as a "new electricity")
Understanding basic concepts that often appear within research (it's helped me with understanding certain concepts that appear within Google Brain's Research)
Growing as an individual and expanding your horizons (you might really enjoy machine learning!)
When code works and data is produced, it's a very fulfilling, empowering feeling (even if it's a very humble result)
I spent a year taking online courses, reading books, and learning about learning (...as a machine). This post is the fruit borne of that labor -- it covers 17 machine learning resources (including online courses, books, guides, conference presentations, etc.) comprising the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). I've also added a summary of "If I were to start over again, how I would approach it."
This article isn't about credit or degrees. It's about regular Joes and Joannas with an interest in machine learning, and who want to spend their learning time efficiently. Most of these resources will consume over 50 hours of commitment. Ain't nobody got time for a painful waste of a work week (especially when this is probably completed during your personal time). The goal here is for you to find the resource that best suits your learning style. I genuinely hope you find this research useful, and I encourage comments on which materials prove most helpful (especially ones not included)! #HumanLearningMachineLearning
Executive summary:
Here's everything you need to know in a chart:
| Machine Learning Resource | Time (hours) | Cost ($) | Year |
| --- | --- | --- | --- |
| Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to | 2 | $0 | '17 |
| {ML} Recipes with Josh Gordon Playlist | 2 | $0 | '16 |
| Machine Learning Crash Course | 15 | $0 | '18 |
| OCDevel Machine Learning Guide Podcast | 30 | $0 | '17- |
| Kaggle's Machine Learning Track (part 1) | 6 | $0 | '17 |
| Fast.ai (part 1) | 70 | $70* | '16 |
| Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems | 20 | $25 | '17 |
| Udacity's Intro to Machine Learning (Kate/Sebastian) | 60 | $0 | '15 |
| Andrew Ng's Coursera Machine Learning | 55 | $0 | '11 |
| iPullRank Machine Learning Guide | 3 | $0 | '17 |
| Review Google PhD | 2 | $0 | '17 |
| Caltech Machine Learning on iTunes | 27 | $0 | '12 |
| Pattern Recognition & Machine Learning by Christopher Bishop | 150 | $75 | '06 |
| Machine Learning: Hands-on for Developers and Technical Professionals | 15 | $50 | '15 |
| Introduction to Machine Learning with Python: A Guide for Data Scientists | 15 | $25 | '16 |
| Udacity's Machine Learning by Georgia Tech | 96 | $0 | '15 |
| Machine Learning Stanford iTunes by Andrew Ng | 25 | $0 | '08 |
(The original chart also rated each resource on Credibility, Code, Math, and Enjoyability using icons; the Bishop book and the Stanford iTunes course were marked N/A.)
*Free, but there is the cost of running an AWS EC2 instance (~$70 when I finished, but I did tinker a ton and made a Rick and Morty script generator, which I ran many epochs [rounds] of...)
Here's my suggested program:
1. Starting out (estimated 60 hours)
Start with shorter content targeting beginners. This will allow you to get the gist of what's going on with minimal time commitment.
Commit three hours to Jason Maye's Machine Learning 101 slidedeck: 2 years of headbanging, so you don't have to.
Commit two hours to watch Google's {ML} Recipes with Josh Gordon YouTube Playlist.
Sign up for Sam DeBrule's Machine Learnings newsletter.
Work through Google's Machine Learning Crash Course.
Start listening to OCDevel's Machine Learning Guide Podcast (skip episodes 1, 3, 16, 21, and 26) in your car, working out, and/or when using hands and eyes for other activities.
Commit two days to working through Kaggle's Machine Learning Track part 1.
2. Ready to commit (estimated 80 hours)
By this point, learners would understand their interest levels. Continue with content focused on applying relevant knowledge as fast as possible.
Commit to Fast.ai 10 hours per week, for 7 weeks. If you have a friend/mentor that can help you work through AWS setup, definitely lean on any support in installation (it's 100% the worst part of ML).
Acquire Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, and read the first two chapters immediately. Then use this as supplemental to the Fast.ai course.
3. Broadening your horizons (estimated 115 hours)
If you've made it through the last section and are still hungry for more knowledge, move on to broadening your horizons. Read content focused on teaching the breadth of machine learning -- building an intuition for what the algorithms are trying to accomplish (whether visual or mathematically).
Start watching videos and participating in Udacity's Intro to Machine Learning (by Sebastian Thrun and Katie Malone).
Work through Andrew Ng's Coursera Machine Learning course.
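Ng's course spends its opening weeks on linear regression trained by gradient descent. As a taste of the mechanics you'll build an intuition for there, here is a minimal pure-Python sketch of batch gradient descent on a toy line-fitting problem (the data, learning rate, and iteration count are my own illustrative choices, not course material):

```python
# Fit y = w*x + b by batch gradient descent on the mean squared error.
# Toy data generated from y = 2x + 1, so the loop should recover w≈2, b≈1.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b, lr = 0.0, 0.0, 0.05

for _ in range(2000):
    # Gradients of mean((w*x + b - y)^2) with respect to w and b.
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The course works the same idea in Octave/MATLAB and with vectorized math; the loop above is just the scalar version of that update rule.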
Your next steps
By this point, you will already have AWS running instances, a mathematical foundation, and an overarching view of machine learning. This is your jumping-off point to determine what you want to do.
You should be able to determine your next step based on your interest, whether it's entering Kaggle competitions; doing Fast.ai part two; diving deep into the mathematics with Pattern Recognition & Machine Learning by Christopher Bishop; giving Andrew Ng's newer Deeplearning.ai course on Coursera; learning more about specific tech stacks (TensorFlow, Scikit-Learn, Keras, Pandas, Numpy, etc.); or applying machine learning to your own problems.
Why am I recommending these steps and resources?
I am not qualified to write an article on machine learning. I don't have a PhD. I took one statistics class in college, which marked the first moment I truly understood "fight or flight" reactions. And to top it off, my coding skills are lackluster (at their best, they're chunks of reverse-engineered code from Stack Overflow). Despite my many shortcomings, this piece had to be written by someone like me, an average person.
Statistically speaking, most of us are average (ah, the bell curve/Gaussian distribution always catches up to us). Since I'm not tied to any elitist sentiments, I can be real with you. Below is a high-level summary of my reviews of all the classes I took, along with a plan for how I would approach learning machine learning if I could start over. Click to expand each course for the full version with notes.
In-depth reviews of machine learning courses:
Starting out
Jason Maye's Machine Learning 101 slidedeck: 2 years of head-banging, so you don't have to ↓
Need to Know: A stellar high-level overview of machine learning fundamentals in an engaging and visually stimulating format.
Loved:
Very user-friendly, engaging, and playful slidedeck.
Has the potential to take some of the pain out of the process, through introducing core concepts.
Breaks up content by beginner/need-to-know (green), and intermediate/less-useful noise (specifically for individuals starting out) (blue).
Provides resources to dive deeper into machine learning.
Provides some top people to follow in machine learning.
Disliked:
That there is not more! Jason's creativity, visual-based teaching approach, and quirky sense of humor all support the absorption of the material.
Lecturer:
Jason Mayes:
Senior Creative Technologist and Research Engineer at Google
Masters in Computer Science from the University of Bristol
Personal Note: He's also kind on Twitter! :)
Links:
Machine Learning 101 slide deck
Tips on Watching:
Set aside 2-4 hours to work through the deck once.
Since there is a wealth of knowledge, refer back as needed (or as a grounding source).
Identify areas of interest and explore the resources provided.
{ML} Recipes with Josh Gordon ↓
Need to Know: This mini-series YouTube-hosted playlist covers the very fundamentals of machine learning with opportunities to complete exercises.
Loved:
It is genuinely beginner-focused.
They make no assumption of any prior knowledge.
Gloss over potentially complex topics that may serve as noise.
Playlist ~2 hours
Very high-quality filming, audio, and presentation, almost to the point where it had its own aesthetic.
Covers some examples in scikit-learn and TensorFlow, which felt modern and practical.
Josh Gordon was an engaging speaker.
Disliked:
I could not get Docker on Windows (the suggested package manager). This wasn't a huge deal, since I already had my AWS setup by this point; however, a bit of a bummer since it made it impossible to follow certain steps exactly.
Issue: Every time I tried to download (over the course of two weeks), the .exe file would recursively start and keep spinning until either my memory ran out, computer crashed, or I shut my computer down. I sent this to Docker's Twitter account to no avail.
Lecturer:
Josh Gordon:
Developer Advocate for TensorFlow at Google
Leads Machine Learning advocacy at Google
Member of the Udacity AI & Data Industry Advisory Board
Masters in Computer Science from Columbia University
Links:
Hello World - Machine Learning Recipes #1 (YouTube)
GitHub: Machine Learning Recipes with Josh Gordon
Tips on Watching:
The playlist is short (only ~1.5 hours screen time). However, it can be a bit fast-paced at times (especially if you like mimicking the examples), so set aside 3-4 hours to play around with examples and allow time for installation, pausing, and following along.
Take time to explore code labs.
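The first episode's "Hello World" centers on training a scikit-learn decision tree to tell apples from oranges using weight and texture. A hedged sketch in that spirit (the specific feature values here are illustrative stand-ins, not a transcript of the video):

```python
from sklearn import tree

# Toy fruit data: [weight in grams, texture (1 = smooth, 0 = bumpy)]
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]  # 0 = apple, 1 = orange

# Train a decision tree classifier and predict a new, unseen fruit.
clf = tree.DecisionTreeClassifier()
clf = clf.fit(features, labels)
print(clf.predict([[160, 0]]))  # → [1], i.e. orange
```

The whole point of the episode is how little code this takes; later episodes swap in different classifiers behind the same fit/predict interface.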
Google's Machine Learning Crash Course with TensorFlow APIs ↓
Need to Know: A Google researcher-made crash course on machine learning that is interactive and offers its own built-in coding system!
Loved:
Different formats of learning: high-quality video (with ability to adjust speed, closed captioning), readings, quizzes (with explanations), visuals (including whiteboarding), interactive components/ playgrounds, code lab exercises (run directly in your browser (no setup required!))
Non-intimidating
One of my favorite quotes: "You don't need to understand the math to be able to take a look at the graphical interpretation."
Broken down into digestible sections
Introduces key terms
Disliked:
N/A
Lecturers:
Multiple Google researchers participated in this course, including:
Peter Norvig
Director of Research at Google Inc.
Previously he directed Google's core search algorithms group.
He is co-author of Artificial Intelligence: A Modern Approach
D. Sculley
Senior Staff Software Engineer at Google
KDD award-winning papers
Works on massive-scale ML systems for online advertising
Was part of a research ML paper on optimizing chocolate chip cookies
According to his personal website, he prefers to go by "D."
Cassandra Xia
Programmer, Software Engineer at Google
She has some really cool (and cute) projects based on learning statistics concepts interactively
Maya Gupta
Leads Glassbox Machine Learning R&D team at Google
Associate Professor of Electrical Engineering at the University of Washington (2003-2012)
In 2007, Gupta received the PECASE award from President George Bush for her work in classifying uncertain (e.g. random) signals
Gupta also runs Artifact Puzzles, the second-largest US maker of wooden jigsaw puzzles
Sally Goldman
Research Scientist at Google
Co-author of A Practical Guide to Data Structures and Algorithms Using Java
Numerous journals, classes taught at Washington University, and contributions to the ML community
Links:
Machine Learning Crash Course
Tips on Doing:
Actively work through playground and coding exercises
OCDevel's Machine Learning Guide Podcast ↓
Need to Know: This podcast focuses on the high-level fundamentals of machine learning, including basic intuition, algorithms, math, languages, and frameworks. It also includes references to learn more on each episode's topic.
Loved:
Great for trips (when traveling a ton, it was an easy listen).
The podcast makes machine learning fun with interesting and compelling analogies.
Tyler is a big fan of Andrew Ng's Coursera course and reviews concepts in Coursera course very well, such that both pair together nicely.
Covers the canonical resources for learning more on a particular topic.
Disliked:
Certain courses were more theory-based; all are interesting, yet impractical.
Due to limited funding the project is a bit slow to update and has fewer than 30 episodes.
Podcaster:
Tyler Renelle:
Machine learning engineer focused on time series and reinforcement
Background in full-stack JavaScript, 10 years web and mobile
Creator of HabitRPG, an app that treats habits as an RPG game
Links:
Machine Learning Guide podcast
Machine Learning Guide podcast (iTunes)
Tips on Listening:
Listen along your journey to help solidify understanding of topics.
Skip episodes 1, 3, 16, 21, and 26 (unless their topics interest and inspire you!).
Kaggle Machine Learning Track (Lesson 1) ↓
Need to Know: A simple code lab that covers the very basics of machine learning with scikit-learn and Panda through the application of the examples onto another set of data.
Loved:
A more active form of learning.
An engaging code lab that encourages participants to apply knowledge.
This track has a built-in Python notebook on Kaggle with all input files included. This removed any and all setup/installation issues.
Side note: It's a bit different than Jupyter notebook (e.g., have to click into a cell to add another cell).
Each lesson is short, which made the entire lesson go by very fast.
Disliked:
The writing in the first lesson didn't initially make it clear that one would need to apply the knowledge in the lesson to their workbook.
It wasn't a big deal, but when I started referencing files in the lesson, I had to dive into the files in my workbook to find they didn't exist, only to realize that the knowledge was supposed to be applied and not transcribed.
Lecturer:
Dan Becker:
Data Scientist at Kaggle
Undergrad in Computer Science, PhD in Econometrics
Supervised data science consultant for six Fortune 100 companies
Contributed to the Keras and Tensorflow libraries
Finished 2nd (out of 1353 teams) in $3 million Heritage Health Prize data mining competition
Speaks at deep learning workshops at events and conferences
Links:
https://www.kaggle.com/learn/machine-learning
Tips on Doing:
Read the exercises and apply to your dataset as you go.
Try lesson 2, which covers more complex/abstract topics (note: this second took a bit longer to work through).
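Lesson 1 of the track walks through loading data with pandas, picking features, and fitting a first scikit-learn model. A self-contained sketch of that workflow on made-up housing numbers (the column names and values are hypothetical stand-ins for the dataset Kaggle provides):

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Tiny stand-in for the housing data used in the track (values invented).
df = pd.DataFrame({
    "Rooms":    [2, 3, 4, 3, 5],
    "Landsize": [150, 300, 450, 250, 600],
    "Price":    [400000, 550000, 700000, 500000, 900000],
})

# Lesson 1 pattern: choose feature columns X and a target y, then fit.
X = df[["Rooms", "Landsize"]]
y = df["Price"]
model = DecisionTreeRegressor(random_state=0)
model.fit(X, y)

print(model.predict(X.head(1)))  # → [400000.] (an unlimited-depth tree memorizes its training rows)
```

Predicting on the training data like this is exactly the trap lesson 2's validation material addresses, which is why the second lesson is worth the extra time.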
Ready to commit
Fast.ai (part 1 of 2) ↓
Need to Know: Hands-down the most engaging and active form of learning ML. The source I would most recommend for anyone (although the training plan does help to build up to this course). This course is about learning through coding. This is the only course that I started to truly see the practical mechanics start to come together. It involves applying the most practical solutions to the most common problems (while also building an intuition for those solutions).
Loved:
Course Philosophy:
Active learning approach
"Go out into the world and understand underlying mechanics (of machine learning by doing)."
Counter-culture to the exclusivity of the machine learning field, focusing on inclusion.
"Let's do shit that matters to people as quickly as possible."
Highly pragmatic approach with tools that are currently being used (Jupyter Notebooks, scikit-learn, Keras, AWS, etc.).
Show an end-to-end process that you get to complete and play with in a development environment.
Math is involved, but is not prohibitive. Excel files helped to consolidate information/interact with information in a different way, and Jeremy spends a lot of time recapping confusing concepts.
Amazing set of learning resources that allow for all different styles of learning, including:
Video Lessons
Notes
Jupyter Notebooks
Assignments
Highly active forums
Resources on Stackoverflow
Readings/resources
Jeremy often references popular academic texts
Jeremy's TEDx talk in Brussels
Jeremy really pushes one to do extra and put in the effort by teaching interesting problems and engaging one in solving them.
It's a huge time commitment; however, it's worth it.
All of the course's profits are donated.
Disliked:
Overview covers their approach to learning (obviously I'm a fan!). If you're already drinking the Kool-aid, skip past.
I struggled through the AWS setup (13-minute video) for about five hours (however, it felt so good when it was up and running!).
Because of its practicality and concentration on solutions used today to solve popular problem types (image recognition, text generation, etc.), it lacks breadth of machine learning topics.
Lecturers:
Jeremy Howard:
Distinguished Research Scientist at the University of San Francisco
Faculty member at Singularity University
Young Global Leader with the World Economic Forum
Founder of Enlitic (the first company to apply deep learning to medicine)
Former President and Chief Scientist of the data science platform Kaggle
Rachel Thomas:
PhD in Math from Duke
One of Forbes' "20 Incredible Women Advancing AI Research"
Researcher-in-residence at the University of San Francisco Data Institute
Teaches in the Masters in Data Science program
Links:
http://course.fast.ai/start.html
http://wiki.fast.ai/index.php/Main_Page
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
Tips on Doing:
Set expectations with yourself that installation is going to probably take a few hours.
Prepare to spend about ~70 hours for this course (it's worth it).
Don't forget to shut off your AWS instance.
Balance out machine learning knowledge with a course with more breadth.
Consider giving part two of the Fast.ai program a shot!
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems ↓
Need to Know: This book is an Amazon best seller for a reason. It covers a lot of ground quickly, empowers readers to walk through a machine learning problem by chapter two, and contains practical up-to-date machine learning skills.
Loved:
Book contains an amazing introduction to machine learning that briskly provides an overarching view of the machine learning ecosystem.
Chapter 2 immediately walks the reader through an end-to-end machine learning problem.
Immediately afterwards, Aurélien pushes the reader to apply the solution to another problem, which was very empowering.
There are review questions at the end of each chapter to ensure one has grasped the content within the chapter and to push the reader to explore more.
Once installation was completed, it was easy to follow and all code is available on GitHub.
Chapters 11-14 were very tough reading; however, they were a great reference when working through Fast.ai.
Contains some powerful analogies.
Each chapter's introductions were very useful and put everything into context. This general-to-specifics learning was very useful.
Disliked:
Installation was a common source of issues during the beginning of my journey, and the text glossed over it. The frustration most people experience with installation deserved more attention and resources.
Writer:
Aurélien Géron:
Led the YouTube video classification team from 2013 to 2016
Currently a machine learning consultant
Founder and CTO of Wifirst and Polyconseil
Published technical books (on C++, Wi-Fi, and Internet architectures)
Links:
https://www.amazon.com/_/dp/1491962291?tag=oreilly20-20
http://shop.oreilly.com/product/0636920052289.do
https://github.com/ageron/handson-ml
Tips on Using:
Get a friend with Python experience to help with installation.
Read the introductions to each chapter thoroughly, read the chapter (pay careful attention to code), review the questions at the end (highlight any in-text answer), make a copy of Aurélien's GitHub and make sure everything works on your setup, re-type the notebooks, go to Kaggle and try on other datasets.
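Chapter 2's end-to-end flow (split the data, train, evaluate on held-out rows) can be sketched in scikit-learn like this — a hedged illustration using a synthetic dataset as a stand-in for the book's housing data:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic regression data standing in for the book's housing dataset
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=42)

# Hold out 20% of the rows as a test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train on the training set only, then score on unseen data
model = LinearRegression()
model.fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
```

The book's real pipeline adds data cleaning, feature scaling, and cross-validation on top of this skeleton, but the split/fit/evaluate rhythm is the same.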
Broadening your horizons
Udacity: Intro to Machine Learning (Kate/Sebastian) ↓
Need to Know: A course that covers a range of machine learning topics, supports building of intuition via visualization and simple examples, offers coding challenges, and a certificate (upon completion of a final project). The biggest challenge with this course is bridging the gap between the hand-holding lectures and the coding exercises.
Loved:
Focus on developing a visual intuition for what each model is trying to accomplish.
This visual approach to learning the mathematics is very useful.
Covers a wide variety and breadth of models and machine learning basics.
In terms of presenting the concept, there was a lot of hand-holding (which I completely appreciated!).
Many people have done this training, so their GitHub accounts can be used as reference for the mini-projects.
Katie actively notes documentation and suggests where viewers can learn more/reference material.
Disliked:
All of the conceptual hand-holding in the lessons is a stark contrast to the challenges of installation, coding exercises, and mini-projects.
This was the first course I started; the limited instructions on setting up the environment and many failed attempts caused me to break down crying at least a handful of times.
The mini-projects are intimidating.
There is extra code added to support the viewers; however, it's done so with little acknowledgement as to what it's actually doing. This made learning a bit harder.
Lecturer:
Caitlin (Katie) Malone:
Director of Data Science Research and Development at Civis Analytics
Stanford PhD in Experimental Particle Physics
Intern at Udacity in summer 2014
Graduate Researcher at the SLAC National Accelerator Laboratory
https://www6.slac.stanford.edu/
Podcaster with Ben Jaffe (currently Facebook UI Engineer and a music aficionado) on a machine learning podcast Linear Digressions (100+ episodes)
Sebastian Thrun:
CEO of the Kitty Hawk Corporation
Chairman and co-founder of Udacity
One of my favorite Sebastian quotes: "It occurred to me, I could be at Google and build a self-driving car, or I can teach 10,000 students how to build self-driving cars."
Former Google VP
Founded Google X
Led development of the robotic vehicle Stanley
Professor of Computer Science at Stanford University
Formerly a professor at Carnegie Mellon University.
Links:
https://www.udacity.com/course/intro-to-machine-learning--ud120
Udacity also offers a next step, the Machine Learning Engineer Nanodegree, which will set one back about $1K.
Tips on Watching:
Get a friend to help you set up your environment.
Print mini-project instructions to check off each step.
Andrew Ng's Coursera Machine Learning Course ↓
Need to Know: The Andrew Ng Coursera course is the most referenced online machine learning course. It covers a broad set of fundamental, evergreen topics with a strong focus on building mathematical intuition behind machine learning models. Also, you can submit assignments and earn a grade for free. If you want to earn a certificate, you can subscribe or apply for financial aid.
Loved:
This course has a high level of credibility.
Introduces all necessary machine learning terminology and jargon.
Contains a very classic machine learning education approach with a high level of math focus.
Quizzes interspersed in courses and after each lesson support understanding and overall learning.
The sessions for the course are flexible; the option to switch into a different section is always available.
Disliked:
The mathematic notation was hard to process at times.
The content felt a bit dated and non-pragmatic. For example, the main concentration was MATLAB and Octave versus more modern languages and resources.
Video quality was less than average and could use a refresh.
Lecturer:
Andrew Ng:
Adjunct Professor, Stanford University (focusing on AI, Machine Learning, and Deep Learning)
Co-founder of Coursera
Former head of Baidu AI Group
Founder and previous head of Google Brain (deep learning) project
Former Director of the Stanford AI Lab
Chairman of the board of Woebot (a machine learning bot that focuses on Cognitive Behavior Therapy)
Links:
https://www.coursera.org/learn/machine-learning/
Andrew Ng recently launched a new course (August 2017) called DeepLearning.ai, a ~15 week course containing five mini-courses ($49 USD per month to continue learning after trial period of 7 days ends).
Course: https://www.coursera.org/specializations/deep-learning
Course 1: Neural Networks and Deep Learning
Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Course 3: Structuring Machine Learning Projects
Course 4: Convolutional Neural Networks
Course 5: Sequence Models
Tips on Watching:
Be disciplined with setting aside timing (even if it's only 15 minutes a day) to help power through some of the more boring concepts.
Don't do this course first, because it's intimidating, requires a large time commitment, and isn't a very energizing experience.
Additional machine learning opportunities
iPullRank Machine Learning Guide ↓
Need to Know: A machine learning e-book targeted at marketers.
Loved:
Targeted at marketers and applied to organic search.
Covers a variety of machine learning topics.
Some good examples, including real-world blunders.
Gives some practical tools for non-data scientists (including: MonkeyLearn and Orange)
I found Orange to be a lot of fun. It struggled with larger datasets; however, it has a very visual interface that was more user-friendly and offers potential to show some pretty compelling stories.
Example: World Happiness Dataset, plotted with:
X-axis: Happiness Score
Y-axis: Economy
Color: Health
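For those without Orange, the same three-variable scatter can be reproduced in plain Python — a hedged sketch with made-up numbers (the real World Happiness columns may be named or scaled differently):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt

# Made-up stand-ins for the World Happiness columns
happiness = [7.5, 7.2, 6.8, 5.9, 5.1, 4.3]
economy = [1.6, 1.5, 1.4, 1.1, 0.9, 0.6]
health = [0.9, 0.9, 0.8, 0.7, 0.6, 0.4]

# x = happiness, y = economy, color = health (mirroring the Orange setup)
fig, ax = plt.subplots()
points = ax.scatter(happiness, economy, c=health, cmap="viridis")
ax.set_xlabel("Happiness Score")
ax.set_ylabel("Economy")
fig.colorbar(points, label="Health")
```

Orange's drag-and-drop interface gets you here faster, but the code version scales better to larger datasets.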
Disliked:
Potential to break up content more with relevant imagery -- the content was very dense.
Writers:
iPullRank Team (including Mike King):
Mike King has a few slide decks on the basics of machine learnings and AI
iPullRank has a few data scientists on staff
Links:
http://ipullrank.com/machine-learning-guide/
Tips on Reading:
Read chapters 1-6 and the rest depending upon personal interest.
Review Google PhD ↓
Need to Know: A two-hour presentation from Google's 2017 I/O conference that walks through getting 99% accuracy on the MNIST dataset (a famous collection of handwritten digits that the machine must learn to identify).
Loved:
This talk struck me as very modern, covering the cutting edge.
Found this to be very complementary to Fast.ai, as it covered similar topics (e.g., ReLU, CNNs, RNNs).
Amazing visuals that help to put everything into context.
Disliked:
The presentation is only a short conference session, not a comprehensive view of machine learning.
Also, a passive form of learning.
Presenter:
Martin Görner:
Developer Relations, Google (since 2011)
Started Mobipocket, a startup that later became the software part of the Amazon Kindle and its mobile variants
Links:
Part 1 - https://www.youtube.com/watch?v=u4alGiomYP4
Part 2 - https://www.youtube.com/watch?v=fTUwdXUFfI8
Tips on Watching:
Google any concepts you're unfamiliar with.
Take your time with this one; 2 hours of screen time doesn't count all of the Googling and processing time for this one.
Caltech Machine Learning iTunes ↓
Need to Know: If math is your thing, this course does a stellar job of building the mathematical intuition behind many machine learning models. Dr. Abu-Mostafa is a raconteur who includes useful visualizations, relevant real-world examples, and compelling analogies.
Loved:
First and foremost, this is a real Caltech course, meaning it's not a watered-down version and contains fundamental concepts that are vital to understanding the mechanics of machine learning.
On iTunes, audio downloads are available, which can be useful for on-the-go learning.
Dr. Abu-Mostafa is a skilled speaker, making the 27 hours spent listening much easier!
Dr. Abu-Mostafa offers up some strong real-world examples and analogies which makes the content more relatable.
As an example, he asks students: "Why do I give you practice exams and not just give you the final exam?" as an illustration of why a testing set is useful. If he were to just give students the final, they would memorize the answers (i.e., overfit to the data) rather than genuinely learn the material. The final is a test of how much students have actually learned.
The last half hour of each class is always a Q&A, where students can ask questions. Their questions were useful for understanding the topic in more depth.
The video and audio quality was strong throughout. There were a few times when I couldn't understand a question in the Q&A, but overall very strong.
This course is designed to build mathematical intuition of what's going on under the hood of specific machine learning models.
Caution: Dr. Abu-Mostafa uses mathematical notation, but it's different from Andrew Ng's (e.g., theta = w).
The final lecture was the most useful, as it pulled a lot of the conceptual puzzle pieces together. The course on neural networks was a close second!
Disliked:
Although it contains mostly evergreen content, being released in 2012, it could use a refresh.
Very passive form of learning, as it wasn't immediately actionable.
Lecturer:
Dr. Yaser S. Abu-Mostafa:
Professor of Electrical Engineering and Computer Science at the California Institute of Technology
Chairman of Machine Learning Consultants LLC
Serves on a number of scientific advisory boards
Has served as a technical consultant on machine learning for several companies (including Citibank).
Multiple articles in Scientific American
Links:
https://work.caltech.edu/telecourse.html
https://itunes.apple.com/us/course/machine-learning/id515364596
Tips on Watching:
Consider listening to the last lesson first, as it pulls the whole course together conceptually. The map of the course, below, was particularly useful for organizing the information taught.
Image source: http://work.caltech.edu/slides/slides18.pdf
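Dr. Abu-Mostafa's practice-exam analogy maps directly onto holding out a test set. Here's a hedged sketch (synthetic data, scikit-learn assumed) of a fully grown decision tree acing its "practice exam" while scoring worse on the "final":

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy synthetic data: flip_y mislabels 10% of points on purpose
X, y = make_classification(n_samples=500, n_features=10, flip_y=0.1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# An unpruned tree memorizes the training set (the "practice exam")...
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
train_acc = tree.score(X_train, y_train)

# ...but the held-out "final exam" reveals how much it actually learned
test_acc = tree.score(X_test, y_test)
```

The gap between the two scores is exactly the overfitting his exam analogy warns about.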
"Pattern Recognition & Machine Learning" by Christopher Bishop ↓
Need to Know: This is a very popular college-level machine learning textbook. I've heard it likened to a bible for machine learning. However, after spending a month trying to tackle the first few chapters, I gave up. There was too much math and too many prerequisites to tackle (even with a multitude of Google sessions).
Loved:
The text of choice for many major universities, so if you can make it through this text and understand all of the concepts, you're probably in a very good position.
I appreciated the history aside sections, where Bishop talked about influential people and their career accomplishments in statistics and machine learning.
Despite being a highly mathematical text, the textbook actually has some pretty visually intuitive imagery.
Disliked:
I couldn't make it through the text, which was a bit frustrating. The statistics and mathematical notation (which is probably very benign for a student in this topic) were too much for me.
The sunk cost was pretty high here (~$75).
Writer:
Christopher Bishop:
Laboratory Director at Microsoft Research Cambridge
Professor of Computer Science at the University of Edinburgh
Fellow of Darwin College, Cambridge
PhD in Theoretical Physics from the University of Edinburgh
Links:
https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=sr_1_2?ie=UTF8&qid=1516839475&sr=8-2&keywords=Pattern+Recognition+%26+Machine+Learning
Tips on Reading:
Don't start your machine learning journey with this book.
Get a friend in statistics to walk you through anything complicated (my plan is to get a mentor in statistics).
Consider taking a (free) online statistics course (Khan Academy and Udacity both have some great content on statistics, calculus, math, and data analysis).
Machine Learning: Hands-on for Developers and Technical Professionals ↓
Need to Know: A fun, non-intimidating end-to-end launching pad/whistle stop for machine learning in action.
Loved:
Talks about practical issues that many other sources didn't really address (e.g. data-cleansing).
Covered the basics of machine learning in a non-intimidating way.
Offers abridged, consolidated versions of the content.
Added fun anecdotes that makes it easier to read.
Overall the writer has a great sense of humor.
Writer talks to the reader as if they're a real human being (i.e., doesn't expect you to go out and do proofs; acknowledges the challenge of certain concepts).
Covers a wide variety of topics.
Because it was well-written, I flew through the book (even though it's ~300 pages).
Disliked:
N/A
Writer:
Jason Bell:
Technical architect, lecturer, and startup consultant
Data Engineer at MastodonC
Former section editor for Java Developer's Journal
Former writer on IBM DeveloperWorks
Links:
https://www.amazon.com/Machine-Learning-Hands-Developers-Professionals/dp/1118889061
https://www.wiley.com/en-us/Machine+Learning%3A+Hands+On+for+Developers+and+Technical+Professionals-p-9781118889060
Jason's Blog: https://dataissexy.wordpress.com/
Tips on Reading:
Download and explore Weka's interface beforehand.
Give some of the exercises a shot.
Introduction to Machine Learning with Python: A Guide for Data Scientists ↓
Need to Know: This was a well-written piece on machine learning, making it a quick read.
Loved:
Quick, smooth read.
Easy-to-follow code examples.
The first few chapters served as a stellar introduction to the basics of machine learning.
Contain subtle jokes that add a bit of fun.
Tip to use the Python package manager Anaconda with Jupyter Notebooks was helpful.
Disliked:
Once again, installation was a challenge.
The "mglearn" utility library threw me for a loop; I had to reread the first few chapters before I figured out it was supporting code for the book.
Although I liked the book, I didn't love it. Overall it just missed the "empowering" mark.
Writers:
Andreas C. Müller:
PhD in Computer Science
Lecturer at the Data Science Institute at Columbia University
Worked at the NYU Center for Data Science on open source and open science
Former Machine Learning Scientist at Amazon
Speaks often on Machine Learning and scikit-learn (a popular machine learning library)
He also makes some incredibly useful graphics, such as this scikit-learn cheat sheet:
Image source: http://peekaboo-vision.blogspot.com/2013/01/machin...
Sarah Guido:
Former senior data scientist at Mashable
Lead data scientist at Bitly
2018 SciPy Conference Data Science track co-chair
Links:
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_7?s=books&ie=UTF8&qid=1516734322&sr=1-7&keywords=python+machine+learning
http://shop.oreilly.com/product/0636920030515.do
Tips on Reading:
Type out code examples.
Beware of the "mglearn" utility library.
Udacity: Machine Learning by Georgia Tech ↓
Need to Know: A mix between an online learning experience and a university machine learning teaching approach. The lecturers are fun, but the course still fell a bit short in terms of active learning.
Loved:
This class is offered as CS7641 at Georgia Tech, where it is part of the Online Masters Degree. Although taking the course here will not earn credit towards the OMS degree, it still delivers a non-watered-down college teaching approach.
Covers a wide variety of topics, many of which reminded me of the Caltech course (including: VC Dimension versus Bayesian, Occam's razor, etc.)
Discusses Markov Decision Chains, which didn't really come up in many other introductory machine learning courses but are referenced within Google patents.
The lecturers have a great dynamic, are wicked smart, and displayed a great sense of (nerd) humor, which make the topics less intimidating.
The course has quizzes, which give the course a slight amount of interaction.
Disliked:
Some videos were very long, which made the content a bit harder to digest.
The course overall was very time consuming.
Despite the quizzes, the course was a very passive form of learning with no assignments and little coding.
Many videos started with a bunch of content already written out. Having the content written out was probably a big time-saver, but it was also a bit jarring for a viewer to see so much information all at once, while also trying to listen.
It's vital to pay very close attention to notation, which compounds in complexity quickly.
The tablet version didn't function flawlessly: some lessons were missing content (which I had to mark down and review on a desktop), the app would crash randomly, and sometimes the audio wouldn't start.
There were no subtitles available on tablet, which I found not only to be a major accessibility blunder, but also made it harder for me to process (since I'm not an audio learner).
Lecturer:
Michael Littman:
Professor of Computer Science at Brown University.
Was granted a patent for one of the earliest systems for Cross-language information retrieval
Perhaps the most interesting man in the world:
Been in two TEDx talks
How I Learned to Stop Worrying and Be Realistic About AI
A Cooperative Path to Artificial Intelligence
During his time at Duke, he worked on an automated crossword solver (PROVERB)
Has a Family Quartet
He has appeared in a TurboTax commercial
Charles Isbell:
Professor and Executive Associate Dean at School of Interactive Computing at Georgia Tech
Focus on statistical machine learning and "interactive" artificial intelligence.
Links:
https://www.udacity.com/course/machine-learning--ud262
Tips on Watching:
Pick specific topics of interest and focusing on those lessons.
Andrew Ng's Stanford's Machine Learning iTunes ↓
Need to Know: A non-watered-down Stanford course. It's outdated (filmed in 2008), the video/audio quality is a bit poor, and most links online now point to the Coursera course. Although the idea of watching a Stanford course was energizing for the first few lectures, it became dreadfully boring. I made it to course six before calling it.
Loved:
Designed for students, so you know you're not missing out on anything.
This course provides a deeper study into the mathematical and theoretical foundation behind machine learning to the point that the students could create their own machine learning algorithms. This isn't necessarily very practical for the everyday machine learning user.
Has some powerful real-world examples (although they're outdated).
There is something about the kinesthetic nature of watching someone write information out. The blackboard writing helped me to process certain ideas.
Disliked:
Video and audio quality were a pain to watch.
Many questions asked by students were hard to hear.
On-screen visuals range from hard to impossible to see.
Found myself counting minutes.
Dr. Ng mentions TA sessions and supplementary learning materials, but these are not available online.
Sometimes the video showed students, which I felt was invasive.
Lecturer:
Andrew Ng (see above)
Links:
https://itunes.apple.com/us/course/machine-learning/id495053006
https://www.youtube.com/watch?v=UzxYlbK2c7E
Tips on Watching:
Only watch if you're looking to gain a deeper understanding of the math presented in the Coursera course.
Skip the first half of the first lecture, since it's mostly class logistics.
Additional Resources
Fast.ai (part 2) - free access to materials, cost for AWS EC2 instance
Deeplearning.ai - $50/month
Udacity Machine Learning Engineer Nanodegree - $1K
https://machinelearningmastery.com/
Motivations and inspiration
If you're wondering why I spent a year doing this, then I'm with you. I'm genuinely not sure why I set my sights on this project, much less why I followed through with it. I saw Mike King give a session on Machine Learning. I was caught off guard, since I knew nothing on the topic. It gave me a pesky, insatiable curiosity itch. It started with one course and then spiraled out of control. Eventually it transformed into an idea: a review guide on the most affordable and popular machine learning resources on the web (through the lens of a complete beginner). Hopefully you found it useful, or at least somewhat interesting. Be sure to share your thoughts or questions in the comments!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!