March 2025 - Digital Infrastructure | Interconnection | Data Center

Crafting a Winning AI Infrastructure Strategy - The Role of Data Center Neutrality

Unlock AI success with a strategic infrastructure approach. Ivo Ivanov, CEO of DE-CIX, explores the vital role of data center neutrality, interconnection, and low-latency solutions for future-proof AI strategies.


As artificial intelligence (AI) becomes an essential driver of business success, simply adopting AI is no longer enough. Companies must also ensure their AI infrastructure is faster, more secure, and more efficient than their competitors'. According to a 2024 MIT Technology Review survey, 95% of enterprises already leverage AI, and half aim for full-scale integration within two years. However, scaling AI requires a robust IT infrastructure capable of handling vast data volumes and high-performance demands.

One of the challenges an enterprise encounters on the journey to becoming AI-ready is future-proofing its internal and external IT infrastructure to handle both the sheer volume of data and the performance demands of AI. These demands are unparalleled: AI training pushes the limits of what is possible in computing power, while AI inference requires ever-lower latencies as we move towards the concept of Zero Latency.

 

How to leverage data centers for AI success

A strategic mix of hyperscale, colocation, and on-premise data centers is crucial for AI operations. For the moment, cloud players and AI as a Service operators – with their hyperscale and high-performance computing (HPC) data centers, often in remote locations – offer perhaps the best option for training AI models. While standard multi-tenant colocation facilities may not always have the capacity to support AI model training, they play a critical role in AI inference – where real-time AI processing occurs. These data centers are close to the edge, to people and businesses, and geographically dispersed, making them an excellent choice for hosting AI agents and models for low-latency AI inference.

But as we head towards a world where Zero Latency is becoming a prerequisite for success, and as products and services are rolled out regionally or globally, it is clear that no single centralized data center facility will suffice to enable AI inference to function with the agility expected. A new approach to infrastructure is required – one that ensures the lowest latency with hyper-local set-ups, and integrates regional, pan-regional, and global infrastructure as needed.
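The case for hyper-local set-ups can be made with a back-of-the-envelope calculation: light in optical fiber propagates at roughly 200,000 km/s (about two-thirds the speed of light in a vacuum), so every kilometer of path adds irreducible round-trip delay before any processing even begins. The sketch below illustrates this floor; the distance scenarios are purely illustrative assumptions, not measurements of any specific facility.

```python
# Back-of-the-envelope: why proximity matters for low-latency AI inference.
# Light in optical fiber travels at roughly 200,000 km/s, i.e. 200 km per
# millisecond, so round-trip propagation delay scales with distance.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s expressed in km per millisecond

def fiber_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# Illustrative user-to-inference-endpoint distances (assumptions):
scenarios = {
    "metro edge facility (~50 km)": 50,
    "regional data center (~500 km)": 500,
    "remote hyperscale campus (~3,000 km)": 3000,
}

for name, km in scenarios.items():
    print(f"{name}: >= {fiber_rtt_ms(km):.1f} ms round trip")
```

Real-world paths add routing detours, queuing, and processing delay on top of this physical floor, which is why a dispersed, interconnected footprint close to end users is the only way to bring inference latency towards zero.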

The solution? Interconnected data centers

By taking a provider-neutral approach and combining facilities from different operators with judicious use of the cloud and on-premise infrastructure, it is possible to create the coverage required to ensure the lowest latency everywhere – in every city, in every region where a company is offering their AI-powered services to customers.

For this to be successful, companies need to leverage distributed, data center- and carrier-neutral Internet Exchanges (IXs), and take an equally provider-neutral approach to data transport networks, so that they can interconnect infrastructure across multiple locations while maintaining low-latency, high-performance AI operations.

With the addition of a virtual cloud routing solution, space in private and public clouds, on-premise data centers, and colocation facilities can be harmonized into a single virtual infrastructure environment, interoperable and seamless.

How to optimize cloud connectivity for AI

AI relies on vast datasets and computational power that often exceed on-premise capabilities, making cloud-based solutions essential. AI models, particularly in training phases, require hyperscalers and AI as a Service providers with the power density to support advanced GPUs. However, many companies still rely on the public Internet for cloud access, which introduces security, latency, and compliance challenges.

Direct interconnection platforms with Cloud and AI Exchange capabilities provide the solution to this challenge. These platforms offer secure and resilient SLA-backed connectivity to major cloud providers, enhancing the performance and value of any cloud project, not least AI projects. They also provide flexible and scalable connectivity, allowing a company to scale capacity up on demand for the masses of data required for AI training, and scale it back down again once training is complete. Furthermore, connecting directly to the cloud providers' private connectivity solutions in this way also reduces cloud egress fees. Finally, a Cloud Exchange with cloud routing enabled allows a company to optimize its multi-cloud strategy, reducing vendor lock-in and enabling businesses to select the best services across providers. For companies operating across multiple locations, geographical cloud diversity ensures lower latency and improved performance.

Harness the power of interconnection

Interconnection is the backbone of a future-ready AI strategy. Enterprise-grade interconnection services utilizing data center- and carrier-neutral Internet Exchanges (IXs) solve critical AI infrastructure challenges by:

  • Enabling a multi-provider and geo-redundant approach to infrastructure
  • Enhancing security and control over data flows
  • Reducing network latency through direct and dedicated connectivity
  • Facilitating seamless multi-cloud and multi-location strategies for improved resilience and Zero Latency coverage

Neither clouds nor data center facilities can function effectively without connectivity, making Cloud and AI Exchanges an essential foundation for high-performance, high-value AI projects. Whether we look at the training phase or at the long-term use of AI inference, ensuring efficient, seamless, secure, and resilient data transmission is key to success.

A future-proof AI strategy

As part of a company’s AI infrastructure strategy, a well-structured interconnection strategy gives companies the agility and the coverage to meet their customers’ needs, wherever they are. By leveraging neutral IXs, Cloud Exchange platforms, AI Exchanges, and interconnected data centers, businesses can secure a high-performance, scalable AI ecosystem – and position themselves ahead of the competition.

 

Ivo Ivanov has been Chief Executive Officer of DE-CIX and Chair of the Board of the DE-CIX Group AG since 2022. Prior to this, Ivanov was Chief Operating Officer of DE-CIX and Chief Executive Officer of DE-CIX International, responsible for the global business activities of the world's leading Internet Exchange operator. He has more than 20 years of experience in the regulatory, legal, and commercial Internet environment. Ranked as one of the top 100 most influential professionals in the telecoms industry (Capacity Magazine's Power 100 listing, 2021/2022/2023/2024), Ivo is regularly invited to share his vision and thought leadership at industry-leading conferences around the globe.