May 19, 2020
StatQuest with Josh Starmer
This video covers all kinds of extra optimizations that XGBoost uses when the training dataset is huge. So we'll talk about the Approximate Greedy Algorithm, Parallel Learning, the Weighted Quantile Sketch, Sparsity-Aware Split Finding (i.e. how XGBoost deals with missing data and uses default paths), Cache-Aware Access, and Blocks for Out-of-Core Computation. That's a lot of stuff, but we'll go through it step-by-step and it will be a whole lot of fun. :)
NOTE: This StatQuest assumes that you are already familiar with:
XGBoost Part 1: XGBoost Trees for Regression:
XGBoost Part 2: XGBoost Trees for Classification:
Quantiles and Percentiles:
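To give a feel for the Approximate Greedy Algorithm mentioned above, here is a minimal sketch (an illustration only, not XGBoost's actual implementation): instead of evaluating every possible threshold for a feature, only a small number of quantiles are tested as candidate split points, which is what makes quantiles and percentiles relevant here.

```python
import numpy as np

# Hypothetical feature values from a huge training dataset (tiny here
# for illustration).
feature = np.array([1.2, 2.2, 3.4, 4.3, 5.5, 6.0, 7.1, 8.9])

# Exact greedy: every midpoint between adjacent sorted values is a
# candidate split threshold.
sorted_vals = np.sort(feature)
exact_candidates = (sorted_vals[:-1] + sorted_vals[1:]) / 2

# Approximate greedy: only a few quantiles (here, the quartiles) are
# used as candidate thresholds, drastically shrinking the search.
approx_candidates = np.quantile(feature, [0.25, 0.50, 0.75])

print(len(exact_candidates))   # 7 candidate thresholds
print(len(approx_candidates))  # 3 candidate thresholds
```

With millions of rows, the exact approach would test millions of thresholds per feature, while the quantile-based approach tests only a fixed handful; the Weighted Quantile Sketch covered in the video refines this by weighting each observation when the quantiles are computed.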
For a complete index of all the StatQuest videos, check out:
If you'd like to support StatQuest, please consider... Patreon:
...or... YouTube Membership:
... ...a cool StatQuest t-shirt or sweatshirt (USA/Europe):
... ...buying one or two of my songs (or go large and get a whole album!)
...or just donating to StatQuest!
Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
Category: Work Skills