Can we build AI like a factory, 1 gigawatt a week? This piece maps the costs, power, parts, and labor involved, and asks who might win the race.
It explains what it would take to build huge AI data centers fast and at scale, and lays out the real constraints: money, parts, power, and people.
The authors ask if Sam Altman’s dream of a gigawatt per week of AI capacity can happen. They find chip demand is so strong that even the giant cost of new fabs looks small next to what buyers will pay. The bigger choke points may be upstream parts for data centers, like turbines, transformers, copper, and switchgear, plus the skilled workers to build and run everything.
Power and time beat everything. Chips are pricey and get replaced on roughly a three-year cycle, so you cannot afford empty buildings. Energy sources with short lead times win, which makes natural gas attractive now. Solar is cheap per panel but needs lots of land, overbuilt capacity, and batteries to keep chips running 24/7. Some data centers may go off-grid to skip long grid-hookup queues. The future might split between many 100 MW sites that soak up spare grid power and a few mega-sites at 1 to 10 GW that stamp out modular halls like factory parts.
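The "empty buildings" point can be made concrete with back-of-envelope arithmetic. The capex figure below is a hypothetical assumption, not from the piece; only the three-year replacement cycle comes from the text.

```python
# Back-of-envelope: why an idle AI data center bleeds money.
# Assumption (hypothetical): $30B of chip capex per GW of capacity.
# From the piece: chips get replaced roughly every three years,
# so we depreciate straight-line over that window.
CHIP_CAPEX_PER_GW = 30e9        # dollars per GW (assumed)
CHIP_LIFETIME_DAYS = 3 * 365    # three-year replacement cycle

daily_depreciation = CHIP_CAPEX_PER_GW / CHIP_LIFETIME_DAYS
print(f"Idle cost per GW-day: ${daily_depreciation / 1e6:.1f}M")
# → Idle cost per GW-day: $27.4M
```

Under that assumption, a 1 GW site sitting dark burns roughly $27M a day in depreciation alone, which is why short-lead-time power beats cheap-but-slow power.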
The authors also look at money and geopolitics. If growth keeps up, $400B a year in US AI spending could eventually be matched by revenue, and $400B+ in ARR by decade's end seems possible. China could gain in a long-timeline world, since it leads in many non-chip components and builds power fast. The piece closes with two paths: an AI winter with a slowdown, or an AI boom with factory-scale buildouts and possibly 1 GW per week by the mid-2030s.
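The headline rate is easier to judge when converted to annual terms. A minimal sketch, using only figures from the piece (1 GW/week, 100 MW sites):

```python
# Converting the piece's target rate into annual capacity,
# and into the count of its smaller "100 MW" sites.
GW_PER_WEEK = 1
WEEKS_PER_YEAR = 52

gw_per_year = GW_PER_WEEK * WEEKS_PER_YEAR   # new capacity per year
sites_100mw_per_week = GW_PER_WEEK / 0.1     # 100 MW = 0.1 GW

print(gw_per_year, sites_100mw_per_week)
# → 52 10.0
```

That is 52 GW of new AI capacity per year, or ten 100 MW sites coming online every single week, which illustrates why the authors frame it as a factory-scale, not project-scale, buildout.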