Leverage pre-trained models and transfer learning for AI/ML development
As part of your AI/ML process, evaluate whether a pre-trained model can be adapted through transfer learning instead of training a new model from scratch.
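A minimal sketch of the transfer-learning idea: keep a pre-trained feature extractor frozen and train only a small task-specific head. The "backbone" here is a hypothetical fixed linear map standing in for a real pre-trained network (e.g. a ResNet or BERT encoder); only the logistic-regression head is trained.

```python
import math

# Hypothetical frozen "backbone": in practice this would be a pre-trained
# network whose weights are downloaded and left untouched. A fixed 2x2
# linear map stands in for the learned feature extractor here.
FROZEN_W = [[1.0, 0.5], [-0.3, 0.8]]

def extract_features(x):
    """Frozen feature extractor: never updated during fine-tuning."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in FROZEN_W]

def train_head(data, labels, lr=0.5, epochs=300):
    """Fit only a small logistic-regression head on top of the frozen
    features -- the only part of the model that is actually trained."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            f = extract_features(x)
            logit = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-logit))
            g = p - y  # gradient of the log loss w.r.t. the logit
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = extract_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0
```

Because only the small head is optimized, the number of trained parameters, and therefore the training compute and energy, is a fraction of what full training would require.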
Large-scale AI/ML models require significant storage space and more resources to run than optimized models. Modern model optimization techniques can dramatically reduce model size and inference costs while maintaining accuracy.
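One common optimization is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats cuts model size roughly 4x. Production toolchains do this per-tensor or per-channel with calibration; the following is a minimal per-tensor sketch of affine (asymmetric) int8 quantization, not any particular framework's API.

```python
def quantize_int8(weights):
    """Map float weights to integers in [-128, 127] with a scale and
    zero point, so each weight takes 1 byte instead of 4."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    zero_point = round(-128 - lo / scale)  # lo maps to -128, hi to 127
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights; the error is bounded by ~scale."""
    return [(qi - zero_point) * scale for qi in q]
```

The reconstruction error per weight is at most about one quantization step (the scale), which is why accuracy is usually preserved for well-behaved weight distributions.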
Data processing for ML training and inference is a significant contributor to the carbon footprint of an ML application. In addition, if the model runs in the cloud, input data must be transferred there and converted into the format the model expects before inference.
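One simple mitigation for the transfer cost is to compress inference payloads before sending them to the cloud; real systems may also preprocess or filter data at the edge. A sketch with the standard library, using a hypothetical JSON batch of feature vectors as the payload:

```python
import gzip
import json

# Hypothetical inference payload: a batch of feature vectors as JSON.
payload = json.dumps(
    {"features": [[0.1 * i, 0.2 * i] for i in range(500)]}
).encode()

# Compress before transfer: fewer bytes moved over the network means
# less energy spent in transit, at the cost of some CPU on each side.
compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)
```

Whether compression is a net win depends on the payload's compressibility and how far the data travels; numeric, repetitive payloads like this one typically compress well.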
Training an AI model carries a significant carbon footprint. Evaluate the underlying framework used for the development, training, and deployment of AI/ML to ensure the process is as energy efficient as possible.
Selecting the right hardware/VM instance types for AI/ML training and inference is critical for energy efficiency. The hardware landscape has evolved dramatically with specialized AI accelerators, GPUs, and custom silicon offering vastly different performance-per-watt characteristics.
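Because energy is power multiplied by time, a specialized accelerator that draws more power but finishes a job much faster can still consume less total energy. The numbers below are purely illustrative assumptions, not vendor measurements:

```python
def training_energy_kwh(avg_power_watts, hours):
    """Energy = average power x time; kWh = (W x h) / 1000."""
    return avg_power_watts * hours / 1000

# Hypothetical comparison: a general-purpose instance vs. an AI
# accelerator for the same training job. Illustrative figures only.
general_purpose = training_energy_kwh(avg_power_watts=300, hours=100)  # 30.0 kWh
ai_accelerator = training_energy_kwh(avg_power_watts=450, hours=40)    # 18.0 kWh
```

This is why performance-per-watt, not peak power draw, is the metric to compare when selecting hardware for training and inference.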
Efficient storage formats for both training data and model artifacts are essential to reduce storage costs, network bandwidth, and computational overhead in AI/ML development pipelines.
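Binary columnar formats such as Parquet or TFRecord are the usual choice for training data; the underlying principle is that packed binary avoids the size and parsing overhead of text. A stdlib-only sketch comparing CSV text against packed float32 for a hypothetical feature table:

```python
import csv
import io
import struct

# Hypothetical training data: 1000 rows of 4 float features.
rows = [[i * 0.001, i * 0.002, i * 0.003, i * 0.004] for i in range(1000)]

# Text CSV: every float is serialized as decimal characters plus delimiters.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
csv_bytes = len(buf.getvalue().encode())

# Packed binary: exactly 4 bytes per float32, no parsing needed on read.
bin_bytes = len(b"".join(struct.pack("<4f", *r) for r in rows))
```

Real formats add columnar layout and compression on top of this, which also speeds up reads that only touch a subset of columns.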
Evaluate and use alternative, more energy-efficient models that provide similar functionality.
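A rough screening heuristic when comparing candidate models on the same hardware: for dense models, per-inference compute (and thus energy) scales roughly with parameter count. For example, a distilled variant such as DistilBERT (~66M parameters) versus BERT-large (~340M parameters); the function below is an illustrative back-of-the-envelope estimate, not a substitute for measurement:

```python
def relative_inference_cost(params_large, params_small):
    """Rough heuristic: dense-model inference compute is approximately
    proportional to parameter count (~2 FLOPs per parameter per token),
    so the parameter ratio approximates the energy ratio on the same
    hardware. Always validate with real measurements."""
    return params_large / params_small

# Illustrative comparison: ~340M-parameter model vs. a ~66M distillation.
ratio = relative_inference_cost(340e6, 66e6)
```

If the smaller model meets the accuracy bar for the task, a ratio like this translates directly into energy savings at inference time.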
Depending on the model parameters and training iterations, training an AI/ML model consumes substantial power and requires many servers, which add to embodied emissions.