8 docs tagged with "ai"

Optimize the size of AI/ML models

Large-scale AI/ML models require significant storage space and consume more resources at inference time than optimized models. Modern model optimization techniques can dramatically reduce model size and inference cost while maintaining accuracy.
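One common optimization technique is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats. A minimal sketch, assuming a single per-tensor symmetric scale (the weight matrix and its shape here are purely illustrative):

```python
import numpy as np

# Hypothetical weight matrix standing in for one layer of a model.
weights = np.random.randn(256, 256).astype(np.float32)

# Symmetric per-tensor quantization: map float32 weights onto int8
# using one scale factor derived from the largest magnitude.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)

# Dequantize for inference; rounding error is bounded by half the scale.
deq = q.astype(np.float32) * scale

print(weights.nbytes // q.nbytes)  # int8 storage is 4x smaller than float32
```

The 4x storage reduction comes directly from the element width (32 bits down to 8); production toolchains add per-channel scales, calibration data, and quantization-aware training to hold accuracy.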

Run AI models at the edge

Data computation for ML workloads and ML inference is a significant contributor to the carbon footprint of an ML application. Moreover, if the model runs in the cloud, the input data must be transferred to the cloud and processed into the format the model requires for inference.
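The data-transfer difference can be made concrete with a back-of-the-envelope comparison. This sketch uses entirely hypothetical payload sizes: a cloud-hosted vision model must receive the raw input frame, while an edge model infers locally and transmits only the result:

```python
# Hypothetical payload sizes (assumptions for illustration only).
IMAGE_BYTES = 224 * 224 * 3   # one uncompressed 224x224 RGB frame
RESULT_BYTES = 16             # a small label/score payload

cloud_upload = IMAGE_BYTES    # cloud inference: ship the whole input
edge_upload = RESULT_BYTES    # edge inference: ship only the result

# Per-inference reduction in bytes leaving the device.
print(cloud_upload // edge_upload)
```

Multiplied across millions of inferences, this reduction in network traffic is one of the ways edge deployment lowers the footprint, alongside avoiding cloud-side preprocessing.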

Select a more energy efficient AI/ML framework

Training an AI model carries a significant carbon footprint. The underlying framework used to develop, train, and deploy AI/ML models should be evaluated and chosen to ensure the process is as energy efficient as possible.

Select the right hardware/VM instance types for AI/ML training

Selecting the right hardware/VM instance types for AI/ML training and inference is critical for energy efficiency. The hardware landscape has evolved dramatically with specialized AI accelerators, GPUs, and custom silicon offering vastly different performance-per-watt characteristics.
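A first-order way to compare candidates is throughput per watt. The sketch below ranks accelerators by TFLOPS/W; the names and numbers are invented placeholders, not vendor specifications:

```python
# Hypothetical accelerator specs (illustrative values, not real hardware):
# peak throughput in TFLOPS and board power in watts.
accelerators = {
    "gpu_a": {"tflops": 312.0, "watts": 400.0},
    "gpu_b": {"tflops": 125.0, "watts": 300.0},
    "asic_c": {"tflops": 275.0, "watts": 250.0},
}

# Rank by performance per watt, a rough proxy for training energy efficiency.
ranked = sorted(
    accelerators,
    key=lambda n: accelerators[n]["tflops"] / accelerators[n]["watts"],
    reverse=True,
)
print(ranked)
```

Peak TFLOPS/W is only a starting point: real selection should also weigh achieved utilization for your workload, memory bandwidth, and the embodied emissions of newer hardware.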

Use sustainable regions for AI/ML training

Depending on the model's parameters and training iterations, training an AI/ML model consumes a great deal of power and requires many servers, which contribute to embodied emissions.
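Region choice matters because grid carbon intensity varies widely. A minimal sketch of picking the lowest-carbon region for a training run; the region names, intensities, and energy figure are all assumptions (real-time intensity data comes from services such as Electricity Maps or WattTime):

```python
# Hypothetical grid carbon intensity per region, in gCO2e/kWh.
region_intensity = {
    "region-hydro": 30,
    "region-mixed": 250,
    "region-coal": 700,
}

TRAINING_KWH = 1_000  # assumed energy consumed by one training run

def emissions_kg(region: str) -> float:
    """Operational emissions of the run in kg CO2e for a given region."""
    return region_intensity[region] * TRAINING_KWH / 1000.0

best = min(region_intensity, key=region_intensity.get)
print(best, emissions_kg(best))
```

Here the same run emits over twenty times less CO2e in the hydro-powered region than in the coal-heavy one, which is why region selection is one of the highest-leverage sustainability decisions for training workloads.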