Basics of Python NumPy for Machine Learning and Deep Learning
Hi there! 👋
I’m Dhyuthidhar, and if you’re new here, welcome! I love exploring computer science topics, especially machine learning, and breaking them down into easy-to-understand concepts. Today, let’s dive into the fundamentals of Python’s NumPy library and its importance in ML and DL. 🚀
Why NumPy?
NumPy is a powerful Python library for numerical computing, widely used in machine learning and deep learning. Efficient computations are crucial for ML models since they often iterate over large datasets. NumPy’s vectorized operations significantly outperform traditional loops, making computations faster and more efficient.
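To see that speed difference for yourself, here is a minimal, illustrative timing sketch (exact numbers will vary by machine) comparing a pure-Python loop with NumPy's vectorized np.dot:

```python
import time
import numpy as np

x = np.random.rand(1_000_000)

# Sum of squares with a pure-Python loop
start = time.time()
loop_sum = 0.0
for v in x:
    loop_sum += v * v
loop_time = time.time() - start

# The same computation, vectorized
start = time.time()
vec_sum = np.dot(x, x)
vec_time = time.time() - start

print(f"loop: {loop_time:.4f}s, vectorized: {vec_time:.4f}s")
```

The vectorized version is typically orders of magnitude faster, because the work runs in optimized C code instead of the Python interpreter.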
After reading this blog, you will:
- Understand essential NumPy functions for ML/DL.
- Learn about vectorized operations and broadcasting.
- Get familiar with matrix and vector manipulations using NumPy.
- Know how to implement functions like sigmoid, normalization, and reshaping using NumPy.
Let’s get started! 🚀
NumPy Functions in Machine Learning
Sigmoid Function
The sigmoid function is commonly used in logistic regression and neural networks. Its mathematical formula is:
$$\sigma(x) = \frac{1}{1 + e^{-x}}$$
Let’s first implement it using Python’s built-in math module:
```python
import math

def basic_sigmoid(x):
    return 1 / (1 + math.exp(-x))

print("basic_sigmoid(1) =", basic_sigmoid(1))
```
This works well for single values but fails when we pass an array, which is essential in ML.
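Continuing with basic_sigmoid from above, a quick illustrative check shows the failure: a plain Python list supports neither unary negation nor math.exp(), so the call raises a TypeError.

```python
# Lists don't support element-wise arithmetic, so this raises a TypeError
try:
    basic_sigmoid([1, 2, 3])
except TypeError as e:
    print("Error:", e)
```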
Now, let’s implement the same function using NumPy:
```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

t_x = np.array([1, 2, 3])
print("sigmoid(t_x) =", sigmoid(t_x))
```
NumPy automatically applies np.exp() element-wise, so the same one-line function now handles entire arrays efficiently.
Sigmoid Gradient
The gradient (derivative) of the sigmoid function is crucial for gradient descent optimization. The derivative is given by:
$$\sigma'(x) = \sigma(x)(1 - \sigma(x))$$
Here’s the implementation:
```python
def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1 - s)

print("sigmoid_derivative(1) =", sigmoid_derivative(1))
print("sigmoid_derivative(np.array([1, 2, 3])) =", sigmoid_derivative(np.array([1, 2, 3])))
```
Matrix Manipulations with np.shape() and np.reshape()
Two commonly used functions in ML/DL are:
- np.shape(): returns the shape of an array.
- np.reshape(): reshapes an array without changing its data.
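For instance (a small illustrative snippet):

```python
a = np.arange(6)           # array([0, 1, 2, 3, 4, 5]), shape (6,)
print(np.shape(a))         # (6,)

b = np.reshape(a, (2, 3))  # same data, viewed as 2 rows x 3 columns
print(b.shape)             # (2, 3)
```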
Image to Vector Conversion
In deep learning, images are often reshaped into 1D vectors before feeding them into neural networks. Let’s convert a (3,3,2) image matrix into a vector:
```python
def image2vector(image):
    # Flatten a 3-D image array into a single column vector
    return image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
```
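Trying it on a toy (3, 3, 2) array (random values, purely for illustration) gives a column vector of shape (18, 1):

```python
image = np.random.rand(3, 3, 2)  # toy 3x3 "image" with 2 channels
v = image2vector(image)
print(v.shape)  # (18, 1)
```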
Normalization
Normalization scales data so that each feature has a similar range, improving gradient descent convergence. We normalize a matrix by dividing each row by its L2 norm, so every row becomes a unit vector:
```python
def normalize_rows(x):
    # keepdims=True keeps the norms as a column of shape (n, 1) for broadcasting
    norm = np.linalg.norm(x, axis=1, keepdims=True)
    return x / norm

x = np.array([[0., 3., 4.], [1., 6., 4.]])
print("normalize_rows(x) =", normalize_rows(x))
```
Understanding Broadcasting
NumPy allows operations between arrays of different shapes through broadcasting. For example, in the normalization function above, the (2,3) matrix is divided element-wise by the (2,1) column of row norms: NumPy stretches the (2,1) array across the three columns, with no explicit loop.
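Here is a minimal illustration of that broadcast:

```python
a = np.array([[1., 2., 3.],
              [4., 5., 6.]])  # shape (2, 3)
b = np.array([[10.],
              [100.]])        # shape (2, 1)
print(a / b)  # b is stretched across the 3 columns
# [[0.1  0.2  0.3 ]
#  [0.04 0.05 0.06]]
```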
Key Takeaways
✔ NumPy’s vectorized operations make ML computations efficient.
✔ The sigmoid function and its derivative are crucial in ML/DL models.
✔ Matrix reshaping and normalization are key preprocessing steps.
✔ Broadcasting simplifies matrix operations without explicit loops.
This is Part 1 of the Basics of NumPy for ML/DL series. Stay tuned for Part 2, where we’ll explore softmax and loss functions using NumPy!
Let me know in the comments what topic you’d like me to cover next! 👇
References
Neural Networks and Deep Learning course by DeepLearning.AI
GitHub Repository for the Jupyter Notebook with code snippets.
Happy coding! 😊