Amid the AI transformation wave, the crux of Edge AI is no longer raw compute alone. An application-first mindset now prevails—solving real pain points while optimizing cost, power, and response time.
According to industry forecasts, the Edge AI market is projected to grow from roughly USD 24.05 billion in 2024 to USD 356.84 billion by 2035, a CAGR of about 27.8%.
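As a quick sanity check of that projection's arithmetic, the sketch below recomputes the implied compound annual growth rate from the two endpoint figures quoted above; it is purely illustrative and uses no data beyond those two numbers.

```python
# Illustrative CAGR check using the market figures quoted above (USD billions).
start_value = 24.05   # projected 2024 market size
end_value = 356.84    # projected 2035 market size
years = 2035 - 2024   # 11-year horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~27.8%, matching the cited figure
```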
DFI recently joined Embedded Computing Design’s online roundtable, “Roundtable Event: Navigating AI at the Edge – A Multi-Stakeholder Technical Deep Dive.” Alongside participants from multiple companies, DFI explored the latest technologies and use cases in Edge AI, spanning applications, business strategy, platform choices, connectivity, and standardization.
As the first Taiwan-based industrial PC company to participate in ECD’s roundtable, DFI showcased deep Edge AI capabilities: leveraging robust industrial-grade hardware and extensive domain know-how to bring AI into production across smart manufacturing, transportation, healthcare, energy, and embedded entertainment. A clear theme emerged: making Edge AI real isn’t just a “stack more tech” exercise; it’s a strategic, practice-driven transformation.
Demand-led delivery: COTS / semi-custom / full custom for durable edge platforms
From clarifying customer needs and tackling concrete pain points to choosing the right platform and ensuring tight HW/SW co-design, every step shapes Edge AI performance and long-term value. At the roundtable, Jarry noted that organizations evaluating Edge AI center on two axes: being needs-driven and choosing the right platform. Only by balancing both can industries progress toward more efficient, sustainable, and intelligent operations.
DFI adopts a COTS (Commercial Off-The-Shelf) strategy to accelerate phase-one deployment. The portfolio spans multiple form factors—industrial motherboards, systems-on-module, single-board computers (SBCs), embedded systems, Edge AI servers, and panel PCs—helping customers quickly shortlist and onboard the right platform.
When requirements call for tuning, DFI moves to semi-customization: precise adaptations on proven platforms (I/O, subsystems, mechanicals, thermal) such as optimized I/O layouts or custom motherboard designs. This approach reduces complexity, shortens development/certification, lowers risk, and preserves flexibility to match the application.
For projects with unique specifications or high differentiation—e.g., defense/aerospace environments and regulations, special mechanicals or industrial design, or mandated certifications—DFI provides full customization. Drawing on years of DMS (Design Manufacturing Service) experience, DFI meets bespoke specs while controlling cost, schedule, and risk through upfront scoping, modular reuse, long-lifecycle components, and rigorous version control.
This three-stage approach lets customers focus resources on application development and business outcomes, while striking the right balance among performance, power, reliability, compliance, and product lifecycle.
Choosing the right platform: car vs. scooter, with UberEats as an example
When choosing an edge AI platform, Jarry offered a relatable analogy based on the vehicles used by food-delivery services.
High-performance platforms (like “cars”): In the U.S., most couriers use their own cars, which offer more horsepower and payload and suit the geography’s long distances and heavy loads. In Edge AI terms, high-performance platforms such as NVIDIA GPU-based systems excel at large models and complex computation; the tradeoff is higher power draw and cost, the classic “performance-first” profile.
High-efficiency platforms (like “scooters”): In Taiwan, couriers weave through alleys on scooters—quick to start, highly maneuverable—well suited to dense urban traffic and short, multi-stop runs. In edge computing, this maps to 24/7 stability, real-time responsiveness, and frontline deployment flexibility with smaller footprint, lower power, and simpler thermals. Solutions from AMD, Intel, NXP, and Qualcomm are like agile scooters—fast to respond, flexible to deploy, and great for on-site, real-time decisions.
Why balance matters: Across regions and use cases, the “car” vs. “scooter” tradeoff—performance vs. efficiency—pushes you toward different optimization strategies. Platform selection isn’t about better or worse; it’s about finding the best point between performance and efficiency given workload, power and cost targets, deployment environment, and O&M conditions. At the edge, raw compute alone isn’t enough—power efficiency, thermal design, and 5–10-year reliability are often just as important. From a total cost of ownership (TCO) perspective, the most powerful platform isn’t always the best choice; the real win is a sustainable balance across performance, power, thermals, and long-term maintenance.
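As a rough illustration of that TCO argument, the snippet below compares two hypothetical platforms over a multi-year deployment. All figures (hardware cost, power draw, electricity price, maintenance, service horizon) are assumptions made up for the example, not DFI or vendor data.

```python
# Toy TCO comparison for two hypothetical edge platforms.
# All figures below are illustrative assumptions, not vendor data.

HOURS_PER_YEAR = 24 * 365          # 24/7 operation at the edge
ELECTRICITY_USD_PER_KWH = 0.15     # assumed average energy price

def tco(hardware_usd: float, watts: float, years: int,
        annual_maintenance_usd: float) -> float:
    """Hardware + energy + maintenance cost over the service life."""
    energy_kwh = watts / 1000 * HOURS_PER_YEAR * years
    return (hardware_usd
            + energy_kwh * ELECTRICITY_USD_PER_KWH
            + annual_maintenance_usd * years)

# "Car": high-performance GPU box; "scooter": low-power fanless box.
car = tco(hardware_usd=3500, watts=250, years=7, annual_maintenance_usd=150)
scooter = tco(hardware_usd=1200, watts=35, years=7, annual_maintenance_usd=80)

print(f"High-performance platform, 7-year TCO: ${car:,.0f}")
print(f"High-efficiency platform, 7-year TCO:  ${scooter:,.0f}")
```

Under these assumed numbers the high-efficiency box costs roughly a third as much over seven years, even though the high-performance box would win on raw throughput; the point is only that power, maintenance, and lifetime belong in the comparison alongside compute.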
DFI offers a “stable-yet-customizable” viewpoint—an ideal middle path between pure COTS and full custom. Its diversified platform strategy includes NVIDIA for peak performance and Intel, AMD, and Qualcomm for efficiency and flexibility, letting customers select the best “vehicle” for each edge workload.
Security was another key theme Jarry emphasized. DFI follows the IEC 62443-4-1 secure development lifecycle and implements TPM at the hardware layer. In partnership with Canonical, DFI integrates Ubuntu Pro to prepare for regulations such as the Cyber Resilience Act (CRA). Ubuntu Pro for Devices delivers up to 10 years of security and package updates, while Landscape streamlines OTA monitoring and fleet operations, shortening IoT time-to-value and strengthening security and compliance for enterprise-grade, long-running platforms.
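To make the hardware root-of-trust point concrete, here is a minimal, illustrative check (not DFI’s or Canonical’s actual tooling) that a Linux edge device exposes a TPM before it is enrolled for fleet management; the paths used are the standard Linux TPM character-device and sysfs locations.

```python
from pathlib import Path

# Standard Linux locations where a TPM 2.0 device is typically exposed.
TPM_PATHS = [
    Path("/dev/tpm0"),              # raw TPM character device
    Path("/dev/tpmrm0"),            # in-kernel TPM resource manager
    Path("/sys/class/tpm/tpm0"),    # sysfs entry for the first TPM
]

def tpm_present() -> bool:
    """Return True if any of the usual TPM device nodes exist."""
    return any(p.exists() for p in TPM_PATHS)

if __name__ == "__main__":
    if tpm_present():
        print("TPM detected: device can anchor measured boot and disk encryption.")
    else:
        print("No TPM found: hardware root of trust unavailable on this node.")
```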
Conclusion: Pragmatic innovation × partnership-driven Edge AI
DFI’s Edge AI strategy rests on a pragmatic, sustainable foundation: AI should solve real problems, not be adopted for its own sake. By choosing platforms built for longevity and balancing performance, power, and reliability, organizations can realize the full potential of Edge AI. With reliability and availability at the core, and more than 40 years of global reach and cross-industry partnerships, DFI helps customers tailor deployments to local contexts and meet diverse application needs, flexibly and at scale.
Watch the full roundtable video: