I. The Fundamentals - ML

The first part of the book covers the following topics:

  • What Machine Learning is, what problems it tries to solve, and the main categories and fundamental concepts of its systems

  • The steps in a typical Machine Learning project

  • Learning by fitting a model to data

  • Optimizing a cost function

  • Handling, cleaning, and preparing data

  • Selecting and engineering features

  • Selecting a model and tuning hyperparameters using cross-validation

  • The challenges of Machine Learning, in particular, underfitting and overfitting (the bias/variance trade-off)

  • The most common learning algorithms: Linear and Polynomial Regression, Logistic Regression, k-Nearest Neighbors, Support Vector Machines, Decision Trees, Random Forests, and Ensemble methods

  • Reducing the dimensionality of the training data to fight the “curse of dimensionality”

  • Other unsupervised learning techniques, including clustering, density estimation, and anomaly detection
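Two of the topics above, fitting a model to data and optimizing a cost function, can be sketched together. The snippet below is a minimal illustration (not code from the book): gradient descent on the MSE of a one-parameter linear model, with made-up data.

```python
# Minimal sketch: fit a one-parameter linear model y = w * x to data
# by minimizing the MSE cost function with gradient descent.

def fit_slope(xs, ys, lr=0.01, steps=1000):
    """Find w minimizing MSE(w) = mean((w*x - y)^2) via gradient descent."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of the MSE with respect to w: (2/n) * sum(x * (w*x - y))
        grad = (2.0 / n) * sum(x * (w * x - y) for x, y in zip(xs, ys))
        w -= lr * grad  # step downhill on the cost surface
    return w

# Noiseless toy data generated with slope 3, so the fit should recover w ≈ 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = fit_slope(xs, ys)
```

The same idea scales up to the multi-parameter models covered in chapter 4 (Training Models); only the gradient computation changes.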
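Likewise, the cross-validation step mentioned above can be sketched in plain Python. This is an illustrative hand-rolled k-fold splitter (the book itself uses Scikit-learn's utilities for this); the data sizes are arbitrary.

```python
# Minimal sketch of k-fold cross-validation splitting: partition n samples
# into k folds, then for each fold train on the rest and validate on it.

def k_fold_indices(n, k):
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation."""
    # Distribute n samples as evenly as possible across k folds.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        val_set = set(val)
        train = [i for i in range(n) if i not in val_set]
        yield train, val
        start += size

# 10 samples, 5 folds: each fold holds out 2 samples for validation.
folds = list(k_fold_indices(10, 5))
```

Averaging a model's validation score over the k folds gives a less noisy estimate than a single train/validation split, which is what makes it useful for tuning hyperparameters.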


Last updated 4 years ago
