I’ve spent 25+ years in the semiconductor industry. Here’s why I’m confident we can take on the A.I. challenge

Jul 23, 2023

We are headed toward a future where artificial intelligence (A.I.) plays a role in everything we do, for every person on the planet. That scale is incredibly exciting–but there are daunting challenges ahead, from the huge computing demands to security and privacy concerns. To solve them, we need to understand one fact: the path to A.I. at scale runs through our everyday devices.

Over the past few decades, our laptops, phones, and other devices have been the place where transformative technologies become tools that people trust and rely on. It’s about to happen again, but with greater impact than ever before: A.I. will transform, reshape, and restructure these experiences in a profound way.

While cloud-centric A.I. is impressive and here to stay, it faces limitations around latency, security, and cost. A.I. running locally can address all three. It brings A.I. into the applications we already use, where we already use them, built right into the devices we always have with us.

However, as A.I. applications grow, we need to make sure our PCs, phones, and devices are A.I.-ready. That means designing traditional computing engines–the central processing unit (CPU) and graphics processing unit (GPU)–to run complex A.I. workloads, as well as creating new, dedicated A.I. engines like neural processing units (NPUs). Our industry is only at the beginning of a multi-year feedback loop where better A.I. hardware begets better A.I. software, which begets better A.I. hardware, which…you get the idea.

This is the future of A.I. at scale–and it also offers a roadmap to what’s next. From my nearly three decades of experience in the semiconductor industry, I see three enduring truths for how these kinds of shifts play out and how we can make the most of this moment.

Meaningful innovation starts with people’s daily needs. Think about the rise of Wi-Fi in the 2000s, the explosion of videoconferencing in the 2010s, or the more recent move to hybrid work. In each case, the industry had to figure out how technology could best fit into people’s lives. Useful applications fuel adoption and further advances until the new technology becomes indispensable.

We’re already beginning this process for A.I. on the PC. Microsoft is building A.I. into collaboration experiences for the 1.4 billion people using Windows. In the near future, A.I. will be integrated into hundreds of applications, and eventually thousands of applications we aren’t even aware of yet. This will not only enhance existing experiences–it will elevate everything we do across work, creativity, and collaboration.

We must candidly discuss challenges to drive better results. That’s the only way to find the right solutions that address customer needs up and down the stack. For A.I., two core barriers are performance and security. Consider that GPT-3 is orders of magnitude larger than GPT-2, increasing from 1.5 billion parameters to 175 billion parameters. Now imagine those kinds of compute demands multiplied across every application, often running simultaneously. Only chips built for A.I. can make sure those experiences are fast, smooth, and power-efficient.

This is one of the most impactful inflection points for the semiconductor industry in decades. We must evolve the design of our hardware and create new, integrated A.I. accelerator engines to deliver A.I. capabilities at much lower power, with the right balance of platform power and performance. At the same time, we’ll need hardware-based security to protect the data and intellectual property that will run through A.I.

It takes an open ecosystem to create world-changing technology. We know that new innovations truly take off when put in the hands of manufacturers and developers. A great example is gaming. Gaming laptops with powerful CPUs and GPUs deliver intensive computing power, which game developers then use to create immersive visuals and engaging gameplay. It’s all part of a collaborative process to deliver on a common goal.

Secure, seamless A.I. will require solutions at every layer of the stack. We’ll need close collaboration to scale the hardware and the operating system, provide tools for developers to adopt, and enable manufacturers and partners to deliver new experiences. Only industry collaboration can move A.I. forward at scale, unleashing a feedback loop and ultimately creating a new generation of A.I.-enabled features and killer apps.

The A.I. promise is real–but so are the challenges. The semiconductor industry is essential to designing and scaling solutions, just as it’s done for other seismic technology shifts in the past. To get there, we must surface and solve practical challenges, collaborate across disciplines, and work toward a shared vision for how A.I. can serve people’s needs. I’m confident our industry will rise to the challenge.

Michelle Johnston Holthaus is the executive VP and general manager of Intel’s Client Computing Group.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.