Examples of hyperparameters include the learning rate, batch size, number of hidden layers, and regularization strength (e.g., dropout rate). You set hyperparameters to fixed values before training, and they affect the model's performance and generalization capability. Because good values are rarely obvious up front, you typically experiment with different combinations (hyperparameter tuning) to find ones that work well. Hyperparameters contrast with model parameters, such as weights, which are updated during model training.
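A minimal sketch of hyperparameter tuning is a grid search: try every combination in a search space, evaluate each one, and keep the best. The search space, the `train_and_evaluate` stand-in, and its scoring formula below are all illustrative assumptions, not from the text; a real run would train a model and return a validation metric.

```python
import itertools

# Hypothetical search space (illustrative values only)
search_space = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64],
    "dropout_rate": [0.0, 0.2, 0.5],
}

def train_and_evaluate(params):
    """Stand-in for a real training run: returns a fake validation
    score that peaks at lr=0.01, batch_size=32, dropout_rate=0.2."""
    score = 1.0
    score -= abs(params["learning_rate"] - 0.01) * 5
    score -= abs(params["batch_size"] - 32) / 100
    score -= abs(params["dropout_rate"] - 0.2)
    return score

def grid_search():
    """Evaluate every hyperparameter combination; return the best."""
    keys = list(search_space)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*search_space.values()):
        params = dict(zip(keys, values))
        score = train_and_evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = grid_search()
print(best)   # the combination with the highest validation score
```

Grid search is exhaustive and only feasible for small spaces; random search or Bayesian optimization scales better when the space is large.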
Model-centric experiment tracking frameworks, such as MLFlow, Weights & Biases, and Neptune.ai, help track the results of model training experiments run with different hyperparameter combinations (and different model architectures). When this experimentation is automated, it is called AutoML.
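The core idea behind these frameworks can be illustrated with a toy tracker: each run records its hyperparameters and resulting metrics, so runs can later be compared and the best one retrieved. This is a hand-rolled stand-in, not the API of MLFlow, Weights & Biases, or Neptune.ai, and the logged values are invented for illustration.

```python
class ExperimentTracker:
    """Toy stand-in for an experiment tracking framework:
    stores the hyperparameters and metrics of each training run."""

    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        """Record one training run's hyperparameters and results."""
        self.runs.append({"params": params, "metrics": metrics})

    def best_run(self, metric, maximize=True):
        """Return the run with the best value for the given metric."""
        chooser = max if maximize else min
        return chooser(self.runs, key=lambda r: r["metrics"][metric])

tracker = ExperimentTracker()
tracker.log_run({"learning_rate": 0.1, "batch_size": 32},
                {"val_accuracy": 0.81})
tracker.log_run({"learning_rate": 0.01, "batch_size": 32},
                {"val_accuracy": 0.88})
tracker.log_run({"learning_rate": 0.001, "batch_size": 64},
                {"val_accuracy": 0.85})

print(tracker.best_run("val_accuracy")["params"])
# → {'learning_rate': 0.01, 'batch_size': 32}
```

Real frameworks add persistence, visualization, and collaboration on top of this basic log-and-compare loop.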
Other hyperparameters include the number of epochs, the number of layers in a feedforward deep learning model, and even the model architecture itself.