No, because one particular data centre can only ever be part of the puzzle; it can never be environmentally friendly just by itself.
A single location can only:
- optimize how much waste heat it produces, and reclaim some of that, and
- promise to somehow make up for the less environmentally friendly resources it spends.
You, in how you manage your systems across multiple locations, can do much more than that.
Imagine two data centres, 400 km apart, both preferentially powered by wind turbines. Which one is more environmentally friendly? The answer lies in the wind: today, the one with usable winds nearby operates with a lower environmental footprint.
The most environmentally friendly computer is the one never produced, and the second best is the one never turned on. So if you do turn one on, make sure your workload can be executed wherever sustainable energy production and use is easiest to attain right now.
Shortcut: execute regular-yet-not-time-critical workloads in the cheapest place that fulfils your requirements. We are starting to see automation for that become reasonably achievable, at least within the big cloud providers.
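The "cheapest place fulfilling your requirements" rule can be sketched in a few lines. This is a hypothetical illustration: the region names, prices, and requirement attributes are invented stand-ins, not real cloud API data.

```python
# Hypothetical sketch: pick the cheapest region that still meets hard
# requirements (compliance, latency) before dispatching a batch job.
# All region data below is illustrative, not from any real provider.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    spot_price_per_hour: float  # current spot price, e.g. USD/hour
    in_eu: bool                 # example compliance attribute
    latency_ms: int             # latency from our users

REGIONS = [
    Region("eu-north", 0.021, in_eu=True, latency_ms=40),
    Region("eu-west", 0.034, in_eu=True, latency_ms=25),
    Region("us-east", 0.018, in_eu=False, latency_ms=110),
]

def pick_region(regions, require_eu=True, max_latency_ms=60):
    """Return the cheapest region that fulfils our hard requirements."""
    eligible = [r for r in regions
                if (r.in_eu or not require_eu)
                and r.latency_ms <= max_latency_ms]
    if not eligible:
        raise RuntimeError("no region fulfils the requirements")
    return min(eligible, key=lambda r: r.spot_price_per_hour)

print(pick_region(REGIONS).name)  # prints "eu-north"
```

The key design point: requirements act as hard filters, price is only a tiebreaker among eligible regions. As spot prices start to reflect local renewable availability, this same tiebreaker quietly becomes a carbon-aware one.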
Over time, energy production dynamics are being exposed more directly to energy buyers, and we can expect that to be echoed in CPU spot prices sooner or later. And as more and more environmental costs become visible to end customers, anyone who can distribute their processing needs can also make use of this trend, "simply" by letting price decisions influence infrastructure choices.
This works as soon as sustainable energy is at least occasionally cheaper than fossil sources (in many places, that has already happened), and it keeps working at least until we solve the mass energy storage challenge (which appears to be hard).
TL;DR: invest in provider-agnostic preemption & auto-scaling automation
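Being preemption-friendly mostly means your job can be killed at any moment and resumed elsewhere. A minimal sketch of that property, assuming a simple item-by-item job and a local JSON checkpoint file (both invented for illustration):

```python
# Hypothetical sketch: a preemption-tolerant batch loop. The job records
# progress after every item, so it can be killed whenever cheap capacity
# (or cheap green energy) disappears and resumed in another location.
# The checkpoint file name and the work function are illustrative.

import json
import os

CHECKPOINT = "progress.json"

def load_checkpoint():
    """Index of the next unprocessed item, or 0 on a fresh start."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_item"]
    return 0

def save_checkpoint(next_item):
    with open(CHECKPOINT, "w") as f:
        json.dump({"next_item": next_item}, f)

def process(item):
    return item * item  # stand-in for the real work

def run(items):
    start = load_checkpoint()
    results = []
    for i in range(start, len(items)):
        results.append(process(items[i]))
        save_checkpoint(i + 1)  # safe to preempt after any item
    return results

run([1, 2, 3, 4])
```

With this shape, the region-picking automation above it in the stack is free to move the job around: a preempted worker loses at most one item of work, which is exactly what makes provider-agnostic auto-scaling cheap to adopt.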