Normalization
Explore how normalization improves [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) performance. Learn scaling techniques like Min-Max and Z-score to optimize [YOLO26](https://docs.ultralytics.com/models/yolo26/) on the [Ultralytics Platform](https://platform.ultralytics.com).
Normalization is a fundamental technique in data preprocessing that involves rescaling numeric attributes to a standard range. In machine learning (ML), datasets often contain features with very different scales, such as age (0–100) versus income (0–100,000). If left unaddressed, these disparities can bias the optimization algorithm toward features with larger values, leading to slower convergence and suboptimal performance. By normalizing data, engineers ensure that every feature contributes proportionately to the final result, allowing neural networks to learn more efficiently.
Common Normalization Techniques
There are several standard methods for transforming data, each suited to different distributions and algorithm requirements; a brief code sketch follows the list below.
- Min-Max Scaling: This is the most intuitive form of normalization. It rescales the data to a fixed range, usually [0, 1], by subtracting the minimum value and dividing by the range (maximum minus minimum). It is widely used in image processing, where pixel intensities are known to be bounded between 0 and 255.
- Z-Score Standardization: While often used interchangeably with normalization, standardization specifically transforms data to have a mean of 0 and a standard deviation of 1. This is particularly useful when the data follows a Gaussian distribution, and it is important for scale-sensitive algorithms such as Support Vector Machines (SVM).
- Log Scaling: For data containing extreme outliers or following a power law, applying a logarithmic transformation compresses the range of values. This makes the distribution easier for the model to learn from, without being skewed by massive value spikes.
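As a rough illustration of these three techniques, the following NumPy sketch applies min-max scaling, z-score standardization, and log scaling to the same toy feature; the array values are invented purely for demonstration.

```python
import numpy as np

# Toy feature with a wide range and one large outlier (illustrative values)
x = np.array([1.0, 5.0, 10.0, 50.0, 1000.0])

# Min-Max scaling: rescale values to the [0, 1] range
min_max = (x - x.min()) / (x.max() - x.min())

# Z-score standardization: zero mean, unit standard deviation
z_score = (x - x.mean()) / x.std()

# Log scaling: compress the range (log1p handles zeros safely)
log_scaled = np.log1p(x)

print(min_max)     # values now bounded in [0, 1]
print(z_score)     # mean ~0, std ~1
print(log_scaled)  # outlier compressed from 1000 to ~6.9
```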
Real-World Applications
Normalization is a standard step in the pipelines of high-performance AI systems across a wide range of industries.
- Computer Vision (CV): In tasks such as object detection and image classification, digital images are composed of pixel values ranging from 0 to 255. Feeding these large integers directly into a network can slow down gradient descent. A standard preprocessing step divides pixel values by 255.0 to normalize them to the [0, 1] range. This practice ensures consistent inputs for advanced models like YOLO26, improving training stability on the Ultralytics Platform.
-
Analyse d'images médicales : les scans médicaux, tels que ceux utilisés dans l'
IA dans le domaine de la santé, proviennent souvent de
machines différentes avec des échelles d'intensité variables. La normalisation garantit que les intensités des pixels d'une IRM ou d'un scanner
sont comparables entre différents patients et équipements. Cette cohérence est essentielle pour une détection précise des
tumeurs,
permettant au modèle de se concentrer sur les anomalies structurelles plutôt que sur les variations de luminosité.
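As a hypothetical illustration of the medical-imaging case, the sketch below applies per-scan z-score normalization to two simulated scans acquired at different intensity scales; the arrays and scale factors are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two simulated scans of the same anatomy from different scanners
scan_a = rng.random((64, 64)) * 400.0   # one scanner's intensity range
scan_b = rng.random((64, 64)) * 4000.0  # another scanner, 10x brighter


def zscore(scan: np.ndarray) -> np.ndarray:
    """Standardize a single scan to zero mean and unit standard deviation."""
    return (scan - scan.mean()) / scan.std()


# After normalization, both scans share a comparable intensity distribution
print(zscore(scan_a).mean(), zscore(scan_a).std())  # ~0.0, ~1.0
print(zscore(scan_b).mean(), zscore(scan_b).std())  # ~0.0, ~1.0
```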
Distinguishing Related Concepts
It is important to differentiate normalization from similar preprocessing and architectural terms found in deep learning.
- vs. Batch Normalization: Data normalization is a preprocessing step applied to the raw input dataset before it enters the network. Conversely, Batch Normalization operates internally between layers during model training, normalizing the output of a previous activation layer to stabilize the learning process (a brief sketch follows this list).
- vs. Image Augmentation: While normalization changes the scale of the pixel values, augmentation changes the content or geometry of the image (e.g., flipping, rotating, or changing colors) to increase dataset diversity. Tools like Albumentations are used for augmentation, whereas normalization is a purely mathematical scaling operation.
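To make the batch-normalization distinction concrete, here is a minimal sketch, assuming PyTorch is available: input normalization happens once on the raw data, while a BatchNorm2d layer re-normalizes activations inside the network on every forward pass. The layer sizes and batch shape are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# Input normalization: a preprocessing step on the raw data (scale to [0, 1])
raw_batch = torch.randint(0, 256, (8, 3, 32, 32), dtype=torch.float32)
inputs = raw_batch / 255.0

# Batch Normalization: a layer inside the network that normalizes activations
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),  # normalizes the conv output across the batch
    nn.ReLU(),
)

outputs = model(inputs)
print(outputs.shape)  # torch.Size([8, 16, 32, 32])
```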
Implementation Example
In computer vision, normalization is often the first step in the pipeline. The following Python example demonstrates how to manually normalize image data using the NumPy library, a process that happens automatically within the Ultralytics YOLO26 data loader during training.
```python
import numpy as np

# Simulate a 2x2 pixel image with values ranging from 0 to 255
raw_image = np.array([[0, 255], [127, 64]], dtype=np.float32)

# Apply Min-Max normalization to scale values to [0, 1]
# This standardizes the input for the neural network
normalized_image = raw_image / 255.0

print(f"Original Range: {raw_image.min()} - {raw_image.max()}")
print(f"Normalized Range: {normalized_image.min()} - {normalized_image.max()}")
```