Tony Stewart, July 12, 2024

When we discuss the importance of Copilots and AI, we typically point to the performance of software and applications as a crucial driver of productivity, creativity, and innovation. The incredible ability of AI systems to process vast amounts of data, make accurate predictions, and automate complex tasks has transformed industries and everyday life. Previously, many AI models had to run in the cloud, but as we move toward a future defined by on-device generative AI processing, it becomes essential to understand the native computation that runs AI models. As AI becomes more integrated into our workflows, the demand for more powerful and efficient AI-capable hardware will continue to grow.

Introducing Copilot+ PCs

Most recently, at the Microsoft Build conference in May, a groundbreaking development was unveiled: a new line of Windows PCs, known as Copilot+ PCs, designed specifically to optimize AI performance, use, and creation. This announcement marks a significant step forward in the evolution of personal computing, introducing hardware that is purpose-built to enhance the capabilities of AI and machine learning applications.

The game-changing Neural Processing Unit (NPU)

Central to the innovation of Copilot+ PCs is the introduction of the Neural Processing Unit (NPU), a specialized silicon chip engineered to handle AI-specific tasks more efficiently than traditional CPUs or GPUs. By focusing on AI workloads, the NPU delivers superior performance and efficiency in real time – including improved generative modalities such as video and audio generation, voice commands, and accelerated image generation – setting a new standard for AI computing.

How does the NPU work alongside the CPU or GPU?

The NPU in Copilot+ PCs works in tandem with the CPU and GPU to optimize overall system performance. While the CPU handles general computing tasks and the GPU excels at rendering graphics, the NPU is dedicated to processing AI algorithms. This division of labor ensures that each component operates at peak efficiency, enhancing the performance of AI applications. TOPS (Tera Operations Per Second, i.e., trillions of operations per second) is a key metric used to measure the performance of NPUs, providing a standardized way to compare AI capabilities across different processors and architectures.
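In practice, this division of labor is often expressed through an inference runtime's execution providers, where the application asks for the NPU first and falls back to the GPU or CPU. The sketch below illustrates that preference logic; the provider names follow ONNX Runtime conventions (QNNExecutionProvider targets Qualcomm NPUs such as those in the first Copilot+ PCs), but the selection helper itself is illustrative, not part of any library.

```python
# Preferred order for AI workloads: NPU first, then GPU, then CPU.
PREFERENCE = ["QNNExecutionProvider",   # NPU (Qualcomm)
              "DmlExecutionProvider",   # GPU (DirectML)
              "CPUExecutionProvider"]   # CPU fallback

def pick_providers(available):
    """Keep the preferred providers that are actually available, in order."""
    return [p for p in PREFERENCE if p in available]

# On a Copilot+ PC the available list might look like this:
available = ["QNNExecutionProvider", "CPUExecutionProvider"]
print(pick_providers(available))  # ['QNNExecutionProvider', 'CPUExecutionProvider']

# In a real application, the result would be passed to the runtime, e.g.:
#   import onnxruntime as ort
#   session = ort.InferenceSession(
#       "model.onnx",
#       providers=pick_providers(ort.get_available_providers()))
```

The fallback list matters: if the NPU is busy or absent, inference still runs, just on a less specialized unit.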

TOPS: the metric for AI performance evaluation

TOPS is a crucial metric for the computational power of AI processing units: it captures the maximum number of operations a processor can perform per second, highlighting the efficiency and speed of AI inferencing. While GPUs may boast a higher range of TOPS, typically between 600 and 1,300, they are designed for a broader array of tasks. In contrast, NPUs, with TOPS ranging from 40 to 45, are specialized for specific AI operations such as deep learning and matrix multiplication. Despite their lower TOPS, NPUs excel in these areas, offering significantly higher efficiency. Understanding TOPS is essential for evaluating and comparing the performance of NPUs, as it directly impacts the capabilities of AI-driven applications.
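A TOPS figure can be sanity-checked with back-of-the-envelope arithmetic: a multiply-accumulate (MAC) unit performs two operations per cycle (one multiply, one add), so peak throughput is MAC count × 2 × clock speed. The MAC count and clock below are hypothetical round numbers chosen to land near the 40-TOPS range quoted above, not the specifications of any shipping NPU.

```python
def theoretical_tops(mac_units: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in trillions (tera) of operations per second."""
    return mac_units * ops_per_mac * clock_hz / 1e12

# A hypothetical NPU with 16,384 MAC units clocked at 1.25 GHz:
print(theoretical_tops(16_384, 1.25e9))  # 40.96 TOPS
```

Note that this is a theoretical peak; sustained throughput depends on memory bandwidth and how well a given model's layers map onto the MAC array.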

Unparalleled implementation of apps on the fastest chip

Copilot+ PCs leverage the power of NPUs to run a wide array of applications natively at incredible speeds. This includes Microsoft 365 apps like Teams, PowerPoint, Outlook, Word, Excel, OneDrive, and OneNote, as well as other popular software such as Chrome, Spotify, Zoom, WhatsApp, Adobe Photoshop, Adobe Lightroom, Blender, Affinity Suite, and DaVinci Resolve. The integration of NPUs ensures that these applications perform efficiently, providing users with a seamless and responsive experience.

Leveraging powerful processors and state-of-the-art AI models

Copilot+ PCs utilize powerful processors and multiple state-of-the-art AI models, including several of Microsoft’s world-class Small Language Models (SLMs). These models are designed to handle specific tasks with high accuracy and efficiency, complementing the broader capabilities of Large Language Models (LLMs). The combination of these AI models enhances the overall performance and versatility of the Copilot+ PCs.

SLMs vs. LLMs

SLMs work on the same principles as large language models – they understand and generate language – but at a much smaller scale and with comparably less complexity. Their scale is usually on the order of millions to a few billion parameters, which is still enormous!

In contrast, LLMs scale to the order of billions or trillions, having much broader capabilities and being able to understand and generate human-like text across a wide range of topics. While LLMs provide extensive versatility, SLMs excel in specialized applications, making them ideal for targeted AI-driven tasks.

Recall capabilities that revolutionize work

One of the most innovative features of Copilot+ PCs is the Recall capability. This function addresses the common issue of finding files or folders we know we have seen or used previously. Traditionally, this involves remembering file locations, websites, or scrolling through numerous emails. With Recall, you can access virtually anything you have seen or done on your PC as if you had an eidetic memory. Copilot+ PCs organize information based on relationships and associations unique to each user, helping you find what you’re looking for quickly and intuitively by piecing together the path to rediscovering your important files and folders. Due to recent concerns over the privacy of using Recall, Microsoft has decided to shift the feature from its broadly available preview to further testing within its Windows Insider Program (WIP), gathering feedback to address security and privacy concerns.

Advancing AI responsibly

As Microsoft continues to innovate with AI, responsible advancement remains a core priority. This involves ensuring that AI technologies are developed and deployed ethically, with a focus on transparency, fairness, and accountability. Microsoft is committed to creating AI systems that respect user privacy, promote inclusivity, and mitigate potential biases, ensuring that the benefits of AI are accessible to all.

Stay ahead of the Copilot wave  

From powerful new NPUs to advanced AI models, and finally to innovative features such as Recall, it is critical to understand what is needed to streamline Copilot at both the software and hardware layers. Here at Alithya, we understand the complexities of implementing Copilot at any stage, and as such, we are here to guide you and your organization to secure success with your AI investments.

As a global provider of learning services and solutions in the digital space with over 30 years of experience, we are committed to helping your business and organization flourish with the analytical, generative, and automation potential of Copilot. To that end, Alithya has introduced a learning series called Copilot Academy, designed to teach you how to leverage Copilot to enhance productivity, automate tasks, and make informed decisions faster. Whether you are looking to optimize your workflow, boost team efficiency, or explore innovative use cases, our academy offers all the tools and knowledge you need in just 4 hours.

Sessions are held every Wednesday from 1pm to 5pm. Don’t miss out on this opportunity to transform the way you work by ensuring that Copilot can truly benefit your organization in meaningful and productive ways. Register today!
