24 docs tagged with "cloud"

Compress stored data

Storing uncompressed data wastes bandwidth and increases storage capacity requirements; applying appropriate compression reduces both storage consumption and the energy needed to read and write it.
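As a minimal illustration of the trade-off, the sketch below uses Python's standard-library `gzip` module to compress a repetitive payload before storage (the payload itself is hypothetical):

```python
import gzip

# Hypothetical payload: repetitive, JSON-like records compress very well.
payload = b'{"sensor": "temp", "value": 21.5}\n' * 1000

compressed = gzip.compress(payload)

# Fewer bytes stored means less energy spent writing, holding, and reading them.
ratio = len(compressed) / len(payload)
print(f"original={len(payload)}B compressed={len(compressed)}B ratio={ratio:.2%}")
```

Compression does cost CPU time, so the savings are largest for data that is written once and read or retained for a long time.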

Containerize your workloads

Containerizing workloads enables better resource utilisation and bin packing, reducing unnecessary compute allocation and embodied carbon compared to running full virtual machines.

Encrypt what is necessary

Encrypting data is a crucial security measure. However, the encryption process can be resource-intensive at multiple levels.

Evaluate other CPU architectures

Applications are built with a software architecture that best fits the business need they are serving. Cloud providers make it easy to evaluate other CPU architectures, which may run the same workload more efficiently.

Implement stateless design

Service state refers to the in-memory or on-disk data required by a service to function. State includes the data structures and member variables that the service reads and writes. Depending on how the service is architected, the state might also include files or other resources stored on disk. Applications that consume large amounts of memory or on-disk data require larger VM sizes, especially in cloud computing, where you would need larger VM SKUs to support high RAM capacity and multiple data disks.
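A stateless handler keeps no per-user data in process memory; state lives in an external store so any instance can serve any request. The sketch below is a hypothetical Python example, with a plain dict standing in for a real external store such as a cache or database:

```python
class ExternalStore:
    """Stand-in for an external key-value store (e.g. a cache or database)."""

    def __init__(self):
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def set(self, key, value):
        self._data[key] = value


store = ExternalStore()


def handle_request(user_id: str, item: str) -> int:
    # Read state from the external store, mutate it, and write it back.
    # The handler itself holds nothing between calls.
    cart = store.get(f"cart:{user_id}", [])
    cart.append(item)
    store.set(f"cart:{user_id}", cart)
    return len(cart)


print(handle_request("u1", "book"))  # 1
print(handle_request("u1", "pen"))   # 2
```

Because no state is pinned to a particular process's memory, instances can stay small and be scaled in and out freely.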

Match your service level objectives to business needs

If service downtime is acceptable, it's better not to strive for the highest availability but to design the solution according to real business needs. Lower availability guarantees can help reduce energy consumption by requiring fewer infrastructure components.

Minimize the total number of deployed environments

In a given application, there may be a need to utilize multiple environments in the application workflow. Typically, a development environment is used for regular updates, while staging or testing environments are used to make sure there are no issues before code reaches a production environment where users may have access. Each added environment has an increasing energy impact, which in turn creates more emissions. As such, it is important to understand the necessity of each environment and its environmental impact.

Optimize impact on customer devices and equipment

Software that demands frequent hardware upgrades increases embodied carbon from device manufacturing; designing for backwards compatibility extends device lifetimes and reduces the carbon footprint of customer equipment.

Queue non-urgent processing requests

All systems have periods of peak and low load. From a hardware-efficiency perspective, we are more efficient with hardware if we minimise the impact of request spikes with an implementation that allows an even utilization of components. From an energy-efficiency perspective, we are more efficient with energy if we ensure that idle resources are kept to a minimum.
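One way to even out utilization is to buffer non-urgent work in a queue at peak time and drain it later at a steady rate. The sketch below is a hypothetical in-process example using Python's standard-library `queue` module; in practice a durable message broker would play this role:

```python
import queue

# Non-urgent tasks are enqueued during request spikes instead of being
# processed immediately.
non_urgent = queue.Queue()


def submit(task):
    # Called at peak load: defer the work rather than execute it now.
    non_urgent.put(task)


def drain(batch_size):
    # Called during low-load periods: process a bounded batch to keep
    # hardware utilization even rather than spiky.
    done = []
    for _ in range(batch_size):
        if non_urgent.empty():
            break
        done.append(non_urgent.get())
    return done


for i in range(5):
    submit(f"report-{i}")

print(drain(batch_size=3))  # ['report-0', 'report-1', 'report-2']
```

Bounding the batch size is what flattens the load curve: the backlog absorbs the spike, and the worker never exceeds a steady processing rate.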

Reduce network traversal between VMs

Placing VMs in the same region or availability zone minimises the physical distance data must travel between instances, reducing the energy consumed by network traversal.

Remove unused assets

Unused cloud resources such as databases, storage buckets, and compute instances continue to consume energy and generate embodied carbon; identifying and decommissioning them eliminates unnecessary waste.

Scale down applications when not in use

Applications consume CPU even when they are not actively in use, for example through background timers, garbage collection, and health checks. Even when the application is shut down, the underlying hardware consumes idle power.

Scale infrastructure with user load

Demand for resources depends on user load at any given time. However, most applications run without taking this into consideration. As a result, resources are underused and inefficient.

Scale Kubernetes workloads based on relevant demand metrics

By default, Kubernetes scales workloads based on CPU and RAM utilization. In practice, however, it's difficult to correlate your application's demand drivers with CPU and RAM utilization. Scaling your workload based on relevant demand metrics, such as HTTP requests, queue length, or cloud alerting events, can help reduce resource utilization, and therefore also your carbon emissions.
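The Kubernetes HorizontalPodAutoscaler computes the target replica count as `ceil(currentReplicas * currentMetricValue / desiredMetricValue)`, and the same formula applies whether the metric is CPU or a demand metric. The sketch below applies it to a hypothetical queue-length metric with an assumed target of 30 messages per replica:

```python
import math


def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    # The HPA scaling formula: scale replicas in proportion to how far
    # the observed per-replica metric is from its target.
    return math.ceil(current_replicas * current_metric / target_metric)


# 3 replicas, each seeing ~50 queued messages, target 30 per replica:
print(desired_replicas(3, current_metric=50, target_metric=30))  # 5

# Demand falls to ~10 messages per replica across 5 replicas: scale in.
print(desired_replicas(5, current_metric=10, target_metric=30))  # 2
```

Feeding a demand metric like queue length into this formula (for example via a custom-metrics adapter or an event-driven autoscaler) scales capacity with actual work, rather than with CPU noise that may not track demand.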

Scale logical components independently

Decomposing applications into independently scalable microservices allows each component to be right-sized for its own demand, reducing overall compute resource consumption and embodied carbon.

Scan for vulnerabilities

Many attacks on cloud infrastructure seek to misuse deployed resources, which leads to an unnecessary spike in usage and cost.

Terminate TLS at border gateway

Transport Layer Security (TLS) ensures that all data passed between the web server and web browsers remains private and encrypted. However, terminating and re-establishing TLS increases CPU usage and might be unnecessary in certain architectures.

Use a service mesh only if needed

Service meshes add overhead through additional containers and increased network traffic, so they should only be deployed for applications that genuinely require the capabilities they provide.

Use cloud native network security tools and controls

Network and web application firewalls provide protection against the most common attacks and shed traffic from malicious bots. These tools help to remove unnecessary data transmission and reduce the burden on the cloud infrastructure, while also using lower bandwidth and less infrastructure.

Use cloud native processor VMs

Cloud VMs built on energy-efficient processors, such as ARM-based cloud-native chips, can run scale-out workloads with significantly lower energy consumption and embodied carbon than traditional alternatives.

Use compiled languages

Interpreted languages re-parse and compile code on every execution, consuming more energy than pre-compiled binaries, which perform compilation once and run more efficiently at runtime.

Use DDoS protection

Distributed denial of service (DDoS) attacks are used to increase the server load so that it is unable to respond to any legitimate requests. This is usually done to harm the owner of the service or hardware.

Use serverless cloud services

Serverless cloud services scale dynamically with demand and share infrastructure across many applications, reducing idle resource consumption and lowering embodied carbon emissions.