Types of NPU Processors and Supporting Architectures
A Neural Processing Unit (NPU) is a specialized hardware accelerator designed specifically to handle artificial intelligence (AI) and deep learning workloads efficiently. Unlike general-purpose processors, NPUs are optimized for tensor operations, matrix computations, and neural network inference, enabling faster and more energy-efficient AI processing in devices ranging from smartphones to data centers.
While NPUs excel at AI tasks, they often work in conjunction with other specialized processors such as GPUs, TPUs, and DSPs to deliver comprehensive computing performance. Below is a detailed breakdown of the key processor types that support or complement NPU functionality in modern computing systems.
Digital Signal Processor (DSP)
Designed for real-time signal processing, DSPs operate on digitized inputs (analog signals are first converted by an ADC) and perform filtering, compression, and enhancement of audio, video, and speech signals.
Advantages
- Real-time processing capabilities
- Low power consumption
- High efficiency in audio and video encoding/decoding
- Widely used in mobile and embedded systems
Limitations
- Limited flexibility for general computing
- Not optimized for complex AI models
- Narrower instruction set compared to CPUs/GPUs
Best for: Voice assistants, noise cancellation, mobile multimedia, IoT sensors
Graphics Processing Unit (GPU)
Originally developed for rendering graphics, GPUs now play a crucial role in AI and machine learning due to their ability to perform massive parallel computations across thousands of cores.
Advantages
- Exceptional parallel processing power
- Widely supported by AI frameworks (e.g., TensorFlow, PyTorch)
- Ideal for training large neural networks
- High throughput for matrix operations
Limitations
- Higher power consumption than NPUs
- Less efficient for lightweight inference tasks
- Bulkier and more expensive for edge deployment
Best for: AI model training, gaming, 3D rendering, scientific computing
Tensor Processing Unit (TPU)
Developed by Google, TPUs are custom ASICs designed specifically for accelerating tensor operations used in machine learning. They outperform GPUs in certain AI inference and training scenarios.
Advantages
- Extremely fast matrix multiplication
- Optimized for TensorFlow workloads
- High energy efficiency in data centers
- Scalable for large-scale AI deployments
Limitations
- Limited availability outside Google Cloud
- Less flexible for non-tensor operations
- Not ideal for general-purpose computing
Best for: Large-scale AI training, cloud-based inference, research applications
Application Processor
The central brain of smartphones, tablets, and embedded systems, application processors manage operating systems, apps, memory, and connectivity. Offloading AI tasks to an NPU enhances overall system efficiency.
Advantages
- Handles general-purpose computing efficiently
- Manages OS and user applications seamlessly
- Integrated with multiple subsystems (camera, display, sensors)
- Enables multitasking and responsive UI
Limitations
- Inefficient for heavy AI workloads
- High power draw when running AI tasks
- Can become a bottleneck without NPU assistance
Best for: Mobile computing, app execution, system control, user interface management
Hybrid Processing Architecture
Hybrid processing leverages heterogeneous computing by combining multiple processor types—such as NPUs, GPUs, DSPs, and CPUs—into a unified system-on-chip (SoC) architecture. This enables intelligent workload distribution based on each processor's strengths.
Advantages
- Optimal performance through task-specific acceleration
- Improved energy efficiency and thermal management
- Seamless memory sharing and inter-processor communication
- Enables advanced AI features on edge devices (e.g., smartphones, drones)
Limitations
- Increased design complexity
- Requires sophisticated software optimization
- Higher development costs
Best for: AI-powered mobile devices, autonomous systems, smart cameras, AR/VR headsets
| Processor Type | Primary Function | AI Efficiency | Power Efficiency | Typical Use Cases |
|---|---|---|---|---|
| Digital Signal Processor (DSP) | Signal processing, audio/video filtering | Moderate | Excellent | Smartphones, wearables, voice assistants |
| Graphics Processing Unit (GPU) | Parallel computing, graphics rendering | High (especially for training) | Fair to Good | AI training, gaming, video editing |
| Tensor Processing Unit (TPU) | Matrix/tensor operations for ML | Very High | Very Good | Data centers, cloud AI, research |
| Application Processor | System control, OS and app execution | Low (without NPU) | Good | Mobile devices, embedded systems |
| Hybrid Processing | Integrated multi-processor workload distribution | Excellent | Excellent | Smartphones, edge AI, autonomous devices |
Expert Tip: In modern AI devices, the NPU doesn't work in isolation. It collaborates with DSPs for sensor data preprocessing, GPUs for heavy computation, and application processors for system integration—creating a balanced, high-performance, and energy-efficient ecosystem.
Functions and Features of NPU (Neural Processing Unit) Processors
The Neural Processing Unit (NPU) is a specialized core processor designed to accelerate artificial intelligence (AI) and machine learning (ML) workloads. Unlike general-purpose CPUs, NPUs are optimized for parallel computation and low-latency inference, making them ideal for AI-driven applications across smartphones, edge devices, data centers, and IoT systems. The exact functions and capabilities of an NPU vary depending on its design and target application—ranging from real-time image recognition in mobile devices to complex neural network inference in autonomous systems.
Core Functions of an NPU
NPUs perform a range of specialized tasks that offload intensive AI computations from the CPU and GPU, enhancing efficiency and performance. Below are the primary functions that define modern NPU operation:
Offloading CPU Tasks
An NPU acts as a dedicated processing core for neural network operations, freeing the main CPU for general computing tasks. By handling neural network inference—such as executing deep learning models for voice assistants, facial recognition, or scene detection—the NPU significantly reduces CPU workload and system latency.
It implements efficient data flow architectures to ensure continuous movement of data through processing elements, minimizing bottlenecks and maximizing throughput for real-time AI applications.
High-Performance Computing & Processing
At the heart of an NPU lies a highly parallel array of mathematical units—including multipliers, adders, and vector processors—optimized for matrix operations and tensor computations used in neural networks.
These units enable rapid execution of convolutional layers, activation functions, and pooling operations essential in deep learning models. This specialized architecture allows NPUs to process thousands of operations per cycle, far exceeding traditional CPU performance for AI tasks.
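To make this workload concrete, here is a minimal NumPy sketch (illustrative only; production NPU code goes through a vendor compiler) of the multiply-accumulate pattern that dominates a dense layer:

```python
import numpy as np

# A dense (fully connected) layer reduces to a matrix multiply plus a bias add.
# NPUs dedicate arrays of multiply-accumulate (MAC) units to exactly this pattern.
batch, in_features, out_features = 8, 512, 256
x = np.random.randn(batch, in_features).astype(np.float32)         # activations
w = np.random.randn(in_features, out_features).astype(np.float32)  # weights
b = np.zeros(out_features, dtype=np.float32)                       # bias

y = x @ w + b            # the MAC-heavy core an NPU parallelizes in hardware
y = np.maximum(y, 0.0)   # ReLU activation, often fused into the same pass
print(y.shape)           # (8, 256)
```

Convolutional layers reduce to the same multiply-accumulate pattern, which is why dedicating silicon to it pays off across nearly all deep learning models.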
Memory Control and Management
NPUs often include dedicated on-chip memory (such as SRAM or cache) to store weights, activations, and intermediate data during neural network inference. This localized memory control reduces reliance on slower external RAM, lowering power consumption and improving processing speed.
Efficient memory management ensures that data remains in close proximity to processing units, enabling continuous data streaming and minimizing idle cycles—a critical factor for maintaining high utilization in AI pipelines.
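One software-visible consequence of this design is tiling: a large matrix multiply is split into blocks sized to fit in on-chip SRAM, so each block of weights and activations is loaded once and fully reused before eviction. A schematic NumPy sketch with illustrative sizes (an NPU compiler performs this blocking automatically against its SRAM capacity):

```python
import numpy as np

def tiled_matmul(a, b, tile=64):
    """Block the computation so each tile fits in fast on-chip memory.
    Schematic only: NumPy runs on the CPU; an NPU toolchain applies the
    same idea in hardware to minimize external-RAM traffic."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # Each small block is loaded once and reused fully,
                # keeping data close to the processing units.
                out[i:i+tile, j:j+tile] += (
                    a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                )
    return out

a = np.random.randn(256, 256).astype(np.float32)
b = np.random.randn(256, 256).astype(np.float32)
assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-3)
```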
Data Path Management
An NPU intelligently manages internal data pathways to route information between processing elements, memory blocks, and I/O interfaces. This includes optimizing data access patterns, prioritizing high-bandwidth streams, and reducing contention in shared resources.
Advanced data path control enables features like data reuse, pipelining, and layer fusion, which collectively enhance computational efficiency and reduce energy consumption during model execution.
Timer and Event Management
NPUs incorporate precision timers and event schedulers to coordinate internal operations, synchronize data flow, and manage task deadlines. These mechanisms are crucial for real-time applications such as object tracking, speech processing, or sensor fusion in robotics and autonomous vehicles.
Event-driven execution allows the NPU to respond dynamically to input triggers, ensuring timely processing without constant polling or CPU intervention.
Multi-Core and Parallel Processing
Modern NPUs often feature multiple processing cores that operate in parallel, allowing simultaneous execution of different neural network layers or models. This multi-core architecture enhances scalability and throughput, especially in applications requiring concurrent AI tasks—like running face detection, voice commands, and gesture recognition at the same time.
Coherent inter-core communication and load balancing ensure optimal resource utilization and minimal processing delays.
Key Features of NPU Processors
Beyond core functions, NPUs are distinguished by advanced architectural features that enable adaptability, efficiency, and high performance in dynamic environments. These features make NPUs suitable for evolving AI workloads and diverse deployment scenarios.
| Function/Feature | Primary Benefit | Typical Use Cases |
|---|---|---|
| AI Task Offloading | Reduces CPU load and improves system responsiveness | Smartphones, smart assistants, edge AI devices |
| Parallel Math Processing | Accelerates neural network inference and training | Image recognition, NLP, autonomous systems |
| Dedicated Memory Control | Minimizes latency and power use | Real-time video analysis, on-device AI |
| High Bandwidth & Low Latency | Supports real-time AI decision-making | Autonomous vehicles, robotics, surveillance |
| Scalable Multi-Core Design | Enables concurrent AI workloads | AR/VR, multi-modal sensing, cloud inference |
Note: While NPUs offer significant advantages in AI acceleration, their effectiveness depends on software optimization, model compatibility, and proper integration within the system-on-chip (SoC) architecture. Developers should leverage vendor-specific SDKs and neural network compilers to fully utilize NPU capabilities. Additionally, power-efficient design and thermal management are crucial when deploying NPUs in compact or mobile devices.
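As one example of such tooling, the sketch below uses TensorFlow Lite's post-training quantization to produce the full-integer (INT8) model format most NPUs execute natively; the saved-model path and calibration data are placeholders for your own trained model and representative samples:

```python
import numpy as np
import tensorflow as tf

def representative_data():
    # Placeholder calibration data; in practice, yield ~100 real input samples.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# "saved_model_dir" is a placeholder path to a trained TensorFlow model.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer quantization, the precision most NPUs run natively.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```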
Scenarios of NPU Processors: Powering the Future of AI
According to a leading emerging technology market advisory firm, an estimated 1.1 billion NPU (Neural Processing Unit) devices will be in use by 2030. This rapid growth is fueled by the transformative impact of NPUs on artificial intelligence applications across industries. Designed specifically for accelerating AI and machine learning workloads, NPUs offer superior efficiency, lower power consumption, and faster inference compared to traditional CPUs and GPUs. Below are key real-world scenarios where NPU technology is making a significant impact.
Gaming & Interactive Entertainment
NPUs are increasingly integrated into gaming platforms and cloud gaming infrastructure, working alongside or in place of traditional GPU-based AI accelerators. They enhance real-time decision-making, dynamic environment rendering, and player behavior prediction.
- Enable AI-driven NPCs (non-player characters) with adaptive behaviors and realistic responses
- Optimize server-side resource allocation in data centers for cloud gaming services
- Improve frame prediction and upscaling using on-device AI models (e.g., DLSS-style AI upscalers)
- Support real-time voice and gesture recognition for immersive AR/VR experiences
Key benefit: Enhanced user experience with minimal latency and reduced energy consumption.
Cybersecurity & Network Defense
In network infrastructure, NPUs (here in the related sense of network processing units, often built on non-blocking switching architectures) play a critical role in securing data flows. Their ability to process packets at line speed enables real-time threat detection and mitigation.
- Accelerate deep packet inspection (DPI) for identifying malware, ransomware, and zero-day exploits
- Enable AI-powered intrusion detection systems (IDS) and intrusion prevention systems (IPS)
- Support intelligent traffic routing and load balancing in enterprise networks
- Facilitate real-time anomaly detection using behavioral AI models
Critical advantage: Faster response to cyber threats with minimal impact on network performance.
Automotive & Autonomous Transportation
NPUs are at the heart of advanced driver-assistance systems (ADAS) and self-driving vehicles. They process massive streams of sensor data in real time, enabling safe and intelligent navigation.
- Perform complex object detection (pedestrians, vehicles, traffic signs) using computer vision models
- Enable sensor fusion by combining inputs from cameras, LiDAR, radar, and ultrasonic sensors
- Support real-time path planning and obstacle avoidance algorithms
- Facilitate on-board decision-making without relying on cloud connectivity
Safety impact: Reduced reaction time and improved reliability in dynamic driving environments.
Smartphones & Mobile Devices
Modern smartphones embed NPUs directly into their system-on-chip (SoC) architecture to handle AI tasks efficiently, extending battery life while boosting performance.
- Accelerate voice assistants (e.g., Siri, Google Assistant) with on-device speech recognition
- Enable secure facial recognition and biometric authentication
- Enhance camera capabilities through AI-powered image processing (night mode, portrait blur, HDR)
- Optimize power management by predicting user behavior and app usage patterns
User benefit: Smoother, faster, and more private AI experiences without draining the battery.
Healthcare & Biomedical Innovation
NPUs are revolutionizing healthcare by accelerating AI models used in diagnostics, research, and patient monitoring. Their high efficiency makes them ideal for edge devices in clinical settings.
- Process biomedical signals such as EEG, ECG, and EMG in real time for early anomaly detection
- Enhance medical imaging with AI-powered segmentation and tumor detection in MRI/CT scans
- Accelerate drug discovery by simulating molecular interactions using deep learning models
- Support genomic analysis for personalized medicine and rare disease identification
Transformative potential: Faster diagnosis, reduced healthcare costs, and improved patient outcomes.
Industry Insight: As AI becomes ubiquitous, NPUs are shifting from niche accelerators to essential components across consumer electronics, industrial systems, and critical infrastructure. Their ability to perform on-device AI processing ensures data privacy, reduces latency, and lowers bandwidth requirements—key advantages in an increasingly connected world.
| Application Domain | Primary NPU Function | Key Benefits | Example Use Cases |
|---|---|---|---|
| Gaming & Entertainment | AI inference for graphics and behavior modeling | Low latency, immersive experiences | Smart NPCs, cloud gaming optimization |
| Cybersecurity | Real-time packet analysis and threat detection | High-speed processing, proactive defense | Intrusion prevention, network monitoring |
| Autonomous Vehicles | Sensor fusion and decision-making | Safety, reliability, real-time response | Self-driving cars, ADAS systems |
| Mobile Devices | On-device AI for personalization and security | Battery efficiency, privacy protection | Face unlock, voice assistants, photo enhancement |
| Healthcare | Medical AI model acceleration | Faster diagnosis, improved accuracy | Medical imaging, genomics, patient monitoring |
Emerging Trends & Future Outlook
- Edge AI Expansion: NPUs are enabling powerful AI at the edge, reducing dependence on cloud computing and improving data privacy.
- Energy Efficiency: With lower power consumption, NPUs are ideal for IoT devices and always-on applications.
- Customization: Chipmakers are developing domain-specific NPUs tailored for automotive, healthcare, or industrial AI workloads.
- Integration with 5G/6G: High-speed networks combined with on-device NPU processing will enable new applications in telemedicine, remote robotics, and smart cities.
- AI Democratization: As NPU-equipped devices become more affordable, advanced AI capabilities are becoming accessible to broader markets.
How to Choose NPU Processors: A Comprehensive Buyer’s Guide
Selecting the right Neural Processing Unit (NPU) is a critical decision for developers, engineers, and product designers integrating AI capabilities into modern devices. An NPU accelerates machine learning tasks such as inference and training, enabling smarter, faster, and more efficient applications across edge computing, mobile devices, IoT systems, and data centers. Choosing the right NPU requires a strategic evaluation of performance, power efficiency, software support, and use-case alignment to ensure long-term success and scalability.
Important Note: Unlike general-purpose CPUs or GPUs, NPUs are specialized for tensor operations and neural network workloads. Misalignment between NPU capabilities and application demands can lead to underperformance, excessive power consumption, or integration delays. Always validate compatibility with your AI framework and deployment environment.
Key Factors to Consider When Selecting an NPU
- Performance Requirements
Determine the computational intensity of your AI workload. High-performance applications such as real-time object detection, autonomous navigation, or large language model inference require NPUs with high TOPS (Tera Operations Per Second) ratings and low-latency response times. A back-of-envelope sizing sketch follows this list.
- For lightweight models (e.g., keyword spotting, gesture recognition), a mid-range NPU with 1–4 TOPS may suffice.
- For complex models (e.g., YOLOv8, BERT, or multimodal AI), consider NPUs offering 10+ TOPS with support for parallel processing.
- Evaluate both peak performance and sustained performance under thermal constraints.
- Check if the NPU supports mixed-precision computing (INT8, FP16, BF16) to balance speed and accuracy.
- Power Efficiency
Power consumption is a decisive factor, especially in battery-powered or thermally constrained environments such as smartphones, drones, wearables, and edge sensors.
- Look for NPUs with high performance-per-watt metrics—ideally above 2 TOPS/W for edge devices.
- Assess idle power draw and dynamic voltage/frequency scaling (DVFS) capabilities.
- Consider NPUs with dedicated low-power inference modes for always-on AI features.
- Review thermal design power (TDP) and cooling requirements for embedded or enclosed systems.
- Scalability and Flexibility
Future-proof your design by selecting an NPU architecture that can adapt to evolving AI models and application needs.
- Choose scalable solutions that support multi-chip configurations or modular expansion.
- Ensure compatibility with variable model sizes—from compact MobileNet variants to large transformers.
- Verify programmability: Can the NPU handle custom layers or non-standard operations?
- Look for support for dynamic batching, sparse networks, and pruning to optimize model deployment.
- Software Ecosystem and Support
A powerful NPU is only as effective as its software stack. Robust tools streamline development, optimization, and deployment.
- Confirm support for major AI frameworks like TensorFlow Lite, PyTorch, ONNX, and OpenVINO.
- Check availability of model compilers, quantization tools, and profiling utilities.
- Evaluate the quality of SDKs, APIs, and documentation—especially for cross-platform integration.
- Look for active developer communities, regular firmware updates, and vendor technical support.
- Consider pre-optimized libraries for computer vision (e.g., CV-CUDA), audio processing, or NLP.
- Target Applications and Use Cases
Different NPUs are optimized for specific domains. Matching the NPU to your use case ensures optimal efficiency and performance.
- Edge AI: Prioritize low power, small footprint, and real-time inference (e.g., Google Edge TPU, Hailo-8).
- Computer Vision: Look for NPUs with hardware acceleration for convolutions and vision transformers.
- Natural Language Processing: Choose NPUs with strong matrix multiplication and attention-layer support.
- Robotics & Autonomous Systems: Require high throughput, low latency, and sensor fusion capabilities.
- Consumer Electronics: Balance performance with cost, size, and thermal output (e.g., Apple Neural Engine, Qualcomm Hexagon).
- Time to Market and Development Resources
Accelerate product development by choosing an NPU vendor that reduces integration complexity.
- Select vendors offering reference designs, evaluation kits, and sample code.
- Ensure comprehensive documentation, including datasheets, pinouts, and timing diagrams.
- Check for available AI model zoo or pre-trained models tailored to the NPU.
- Consider vendor partnerships with cloud platforms (e.g., AWS IoT, Azure Percept) for seamless deployment.
- Assess the learning curve for developers and availability of training materials.
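As promised under the performance and power factors above, here is a back-of-envelope sizing sketch; every number in it is an illustrative assumption, to be replaced with your model's real profile and the vendor's datasheet figures:

```python
# Back-of-envelope NPU sizing. All numbers below are illustrative assumptions.

def conv2d_macs(out_h, out_w, out_ch, k_h, k_w, in_ch):
    """Multiply-accumulate count for one standard convolution layer."""
    return out_h * out_w * out_ch * k_h * k_w * in_ch

# Example: one 3x3 conv producing a 224x224x64 map from 64 input channels.
layer_macs = conv2d_macs(224, 224, 64, 3, 3, 64)
print(f"Single layer: {layer_macs / 1e9:.2f} GMACs")   # ~1.85 GMACs

macs_per_frame = 5e9                 # assumed whole-network cost per frame
ops_per_frame = 2 * macs_per_frame   # 1 MAC = 1 multiply + 1 add = 2 ops
fps = 30                             # target frame rate

required_tops = ops_per_frame * fps / 1e12
print(f"Required sustained throughput: {required_tops:.2f} TOPS")  # 0.30 TOPS

# Power check against an assumed efficiency of 2 TOPS/W (see factor 2).
print(f"Estimated draw: {required_tops / 2.0:.2f} W")  # 0.15 W
```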
| NPU Selection Criteria | Ideal For | Potential Trade-offs | Top Vendor Examples |
|---|---|---|---|
| High Performance (10+ TOPS) | Data centers, autonomous vehicles, industrial AI | Higher power draw, larger footprint | NVIDIA, Graphcore, Cerebras |
| Low Power & Small Form Factor | Wearables, IoT sensors, mobile devices | Limited model complexity support | Google Edge TPU, Syntiant, GreenWaves |
| Flexible Software Stack | R&D, prototyping, multi-framework projects | May require more tuning effort | Intel Movidius, Qualcomm Hexagon |
| High Scalability | Cloud inference, server farms, AI clusters | Increased system complexity | Graphcore, Groq, Tesla Dojo |
| Cost-Effective Integration | Mass-market consumer electronics | Lower peak performance | Rockchip, Amlogic, MediaTek |
Expert Tip: Before finalizing your NPU selection, request evaluation samples or access to cloud-based development platforms. Run benchmark tests using your actual AI models to measure real-world inference speed, power consumption, and accuracy. This hands-on validation can prevent costly redesigns later in the development cycle.
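A minimal latency benchmark along those lines is sketched below with ONNX Runtime; the model file, input name, and tensor shape are placeholders, and on real NPU hardware you would substitute the vendor's execution provider or delegate for the CPU provider used here:

```python
import time
import numpy as np
import onnxruntime as ort

# Placeholders: substitute your model file, input name, and input shape.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Warm up so one-time initialization doesn't skew the numbers.
for _ in range(10):
    session.run(None, {input_name: x})

# Measure steady-state latency over repeated runs.
runs = 100
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: x})
elapsed = time.perf_counter() - start
print(f"Mean latency: {1000 * elapsed / runs:.2f} ms "
      f"({runs / elapsed:.1f} inferences/s)")
```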
Additional Considerations for Long-Term Success
- Security Features: Look for NPUs with built-in support for secure boot, encrypted model storage, and trusted execution environments (TEE), especially for sensitive applications.
- AI Model Optimization: Choose NPUs that support pruning, quantization, and knowledge distillation to reduce model size without sacrificing accuracy.
- Vendor Roadmap: Review the manufacturer’s future product plans to ensure ongoing support and backward compatibility.
- Compliance and Certifications: Verify that the NPU meets industry standards (e.g., ISO 26262 for automotive, IEC 61508 for industrial).
- Total Cost of Ownership (TCO): Factor in not just the chip price, but also development tools, licensing fees, and long-term maintenance costs.
Selecting the right NPU is more than a technical decision—it's a strategic investment in your product’s intelligence, efficiency, and market competitiveness. By carefully evaluating performance, power, software support, and alignment with your target use case, you can choose an NPU that delivers exceptional AI capabilities today and scales effectively into the future. When in doubt, consult with the NPU vendor’s application engineers or leverage third-party benchmarking reports to make an informed, data-driven decision.
NPU Processor FAQ: Understanding Alibaba Cloud's AI Acceleration
Which NPU instance types does Alibaba Cloud offer?
Alibaba Cloud offers a diverse range of NPU-enabled instance types tailored for various artificial intelligence and machine learning workloads. These instances are built on virtualized NPU technology, allowing flexible allocation and scalable deployment as elastic cloud servers. The supported NPU instance families include:
- Elastic NPU (Artificial Intelligence): Designed for dynamic AI inference workloads with on-demand scalability.
- Edge NPU: Optimized for low-power, real-time AI processing at the network edge, ideal for IoT and smart devices.
- Low-Latency NPU: Engineered for time-sensitive applications requiring ultra-fast inference response, such as autonomous systems and real-time analytics.
- GPU-NPU Hybrid Instances: Combine GPU flexibility with NPU efficiency for mixed workloads involving both training and inference.
- Machine Learning NPU: Specialized for high-throughput inference tasks in deep learning models.
- Machine Learning NPU with Huge Memory: Supports memory-intensive AI models like large language models (LLMs) and complex neural networks.
- High Cost-Performance NPU: Offers an optimized balance of price and performance for scalable AI deployment in production environments.
All these instances run on Linux and are fully integrated into Alibaba Cloud’s ecosystem, supporting popular AI frameworks such as TensorFlow, PyTorch, and ONNX through optimized drivers and runtime environments.
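A typical first step when targeting such runtimes is exporting a trained model to ONNX. The sketch below is generic PyTorch, not Alibaba-specific; the model and input shape are illustrative:

```python
import torch
import torchvision.models as models

# Illustrative model; substitute your own trained network.
model = models.mobilenet_v2(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # placeholder input shape

# Export to ONNX so NPU-side runtimes and compilers can consume the graph.
torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
)
```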
How does an NPU differ from a GPU?
While both NPUs (Neural Processing Units) and GPUs (Graphics Processing Units) are designed to accelerate computational workloads, they differ significantly in architecture, purpose, and efficiency:
| Feature | NPU | GPU |
|---|---|---|
| Primary Purpose | Optimized specifically for AI inference and neural network operations. | Originally designed for graphics rendering; now widely used for parallel computing including AI training. |
| Architecture Focus | Built for fixed-function matrix operations and low-precision arithmetic (e.g., INT8, FP16), enabling high efficiency in deep learning inference. | Highly parallel general-purpose processors with strong floating-point performance (FP32/FP64), suitable for complex computations. |
| Programmability | Less flexible but highly optimized for specific AI tasks; often uses dedicated compilers and frameworks. | Highly programmable using CUDA, OpenCL, or SYCL, making it adaptable to a wide range of scientific and AI workloads. |
| Power Efficiency | Superior energy efficiency for inference tasks due to specialized design. | Higher power consumption, especially under sustained load. |
| Use Case | Best suited for deploying trained models in production (e.g., image recognition, NLP). | Ideal for model training, simulation, rendering, and general-purpose parallel computing. |
It's important to note that TPUs (Tensor Processing Units), developed by Google, are often compared to NPUs. While TPUs excel in both training and inference, some hybrid systems may integrate NPU and TPU-like capabilities to provide redundancy and performance optimization. However, Alibaba Cloud’s NPU instances are distinct from TPUs and are independently optimized for cloud-based AI inference.
What are the benefits of using NPUs?
NPUs offer several compelling advantages for modern AI-driven applications:
- Accelerated AI Performance: NPUs are purpose-built to execute neural network operations at high speed, significantly reducing inference latency compared to CPUs or even GPUs in certain scenarios.
- Improved Power Efficiency: With optimized hardware for low-precision calculations, NPUs consume less power per inference, making them ideal for large-scale deployments and edge computing.
- CPU and GPU Offloading: By handling AI inference tasks, NPUs free up CPU resources for general computing and reduce GPU workload, leading to better system-wide performance and resource utilization.
- Cost Savings: Higher throughput and lower energy consumption translate into reduced operational costs, especially in data centers running thousands of inference requests.
- Scalability: In cloud environments like Alibaba Cloud, NPU instances can be easily scaled up or out based on demand, supporting elastic AI services.
- Deep Learning Optimization: NPUs feature architectures tailored for convolutional neural networks (CNNs), transformers, and other deep learning models, delivering superior performance in computer vision, natural language processing, and recommendation systems.
These benefits make NPUs a strategic choice for enterprises aiming to deploy efficient, responsive, and cost-effective AI solutions in production.
What workloads are NPUs used for?
NPUs are specifically engineered for AI and machine learning inference tasks—operations that occur after a model has been trained. They excel in scenarios requiring fast, repeated execution of neural network predictions. Key application areas include:
- Computer Vision: Real-time image and video analysis for facial recognition, object detection, surveillance, and autonomous vehicles.
- Natural Language Processing (NLP): Running large language models (LLMs) for chatbots, translation, sentiment analysis, and text summarization with low latency.
- Speech Recognition and Synthesis: Enabling voice assistants, transcription services, and real-time voice-to-text applications.
- Recommendation Systems: Delivering personalized content or product suggestions in e-commerce, streaming platforms, and social media.
- Edge AI: Supporting intelligent devices like smart cameras, drones, and industrial sensors where low power and fast response are critical.
- Real-Time Analytics: Processing streaming data for anomaly detection, predictive maintenance, and fraud detection.
While NPUs are not typically used for model training (which demands higher flexibility and precision), they are unmatched in delivering high-throughput, energy-efficient inference at scale. When combined with cloud orchestration tools, NPU instances enable responsive, intelligent applications across industries—from healthcare and finance to retail and manufacturing.