Standard Kernel secures $20M seed funding to improve AI software performance
Tiffanie Lebel
Standard Kernel, a Palo Alto-based artificial intelligence startup, has raised $20 million in a seed funding round aimed at improving the efficiency of software that powers AI systems. The investment was led by Jump Capital, with participation from General Catalyst, Felicis, Cowboy Ventures, Link Ventures, Essence VC, and several angel investors. The company announced the funding on March 11 and said the capital will support the development of technology that automatically optimizes AI software for modern computing hardware, according to PR Newswire.
The funding will allow Standard Kernel to expand its engineering team and continue building tools designed to improve how artificial intelligence workloads run on high-performance chips. By focusing on software optimization, the company aims to help organizations get more computing power from existing hardware, which could reduce operational costs and speed up the execution of AI models.
AI-driven approach to GPU optimization
Standard Kernel’s core technology focuses on GPU kernels, the small but essential pieces of code that control how calculations are executed on graphics processing units. GPUs are widely used in artificial intelligence because they can process large volumes of data simultaneously, making them well suited for tasks such as training neural networks and running complex machine learning models.
Optimizing these kernels is a highly technical process. Engineers typically write and refine the code manually, adjusting it to match the characteristics of specific hardware and workloads. This work requires deep expertise in low-level programming, hardware architecture, and performance tuning. As AI systems become more complex and new generations of chips are introduced, maintaining optimized software becomes increasingly challenging.
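The performance gap that hand-tuned kernels close is easy to demonstrate even on a CPU. The sketch below is a general illustration of the principle, not Standard Kernel's technology: it compares a textbook triple-loop matrix multiply written in plain Python against NumPy's `@` operator, which dispatches to a hand-optimized BLAS kernel. Both produce the same result; the tuned path is typically orders of magnitude faster.

```python
import time
import numpy as np

def naive_matmul(a, b):
    """Textbook triple-loop matrix multiply: correct, but unoptimized."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i, p] * b[p, j]
            out[i, j] = s
    return out

rng = np.random.default_rng(0)
a = rng.standard_normal((128, 128))
b = rng.standard_normal((128, 128))

t0 = time.perf_counter()
c_naive = naive_matmul(a, b)
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
c_blas = a @ b  # dispatches to a tuned BLAS kernel
t_blas = time.perf_counter() - t0

# Same numerical result, very different cost.
print(np.allclose(c_naive, c_blas))
print(t_naive > t_blas)
```

On a GPU the same dynamic applies at a finer grain: tile sizes, memory-access patterns, and instruction scheduling must be matched to the specific chip, which is the manual work described above.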
Standard Kernel’s platform attempts to automate this process using artificial intelligence. The system generates GPU kernels that are tailored to the exact hardware configuration and type of AI task being performed. By automatically producing optimized code, the platform aims to eliminate much of the manual effort required to tune software for high-performance computing environments.
The company says its technology operates at the lowest layers of the computing stack, interacting directly with chip instructions to improve efficiency. In tests conducted with partners on NVIDIA H100 GPUs, the company reported speedups ranging from 80 percent faster to as much as four times the performance of widely used software libraries.
These gains could have significant implications for organizations operating large AI clusters. Many companies rely on thousands of GPUs to train and deploy machine learning models, and even small improvements in efficiency can translate into meaningful cost savings and faster processing times. By optimizing workloads automatically, Standard Kernel hopes to make advanced AI infrastructure easier to manage and more productive.
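To make the cost argument concrete, here is a back-of-the-envelope sketch. Every number in it (cluster size, hourly rate, speedup) is an illustrative assumption, not a figure from the company:

```python
# Hypothetical cluster economics: all numbers below are illustrative assumptions.
num_gpus = 1000            # assumed cluster size
cost_per_gpu_hour = 2.50   # assumed $/GPU-hour
hours_per_month = 730

baseline_monthly_cost = num_gpus * cost_per_gpu_hour * hours_per_month

# A 1.8x kernel speedup means a fixed workload finishes in 1/1.8 of the time,
# so the compute bill for that workload shrinks proportionally.
speedup = 1.8
optimized_monthly_cost = baseline_monthly_cost / speedup

monthly_savings = baseline_monthly_cost - optimized_monthly_cost
print(f"Baseline: ${baseline_monthly_cost:,.0f}/month")
print(f"With {speedup}x speedup: ${optimized_monthly_cost:,.0f}/month")
print(f"Savings: ${monthly_savings:,.0f}/month")
```

Even under these modest assumptions, a cluster of this size would save several hundred thousand dollars per month, which is why kernel-level efficiency gains attract this level of investment.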
Growing demand for efficient AI infrastructure
The need for more efficient AI software has become increasingly urgent as the scale of machine learning workloads continues to expand. Companies developing large language models, recommendation systems, and other advanced AI applications are investing heavily in high-performance computing infrastructure.
While hardware capabilities have improved rapidly, the software used to control and optimize these systems often struggles to keep pace. As a result, many AI clusters operate below their theoretical maximum performance. Achieving optimal efficiency typically requires extensive manual tuning by specialized engineers, a resource that can be both scarce and expensive.
Standard Kernel was founded to address this gap between hardware capability and software performance. By automating kernel generation and optimization, the company aims to enable faster adoption of new AI hardware while reducing the complexity of maintaining high-performance systems.
The startup’s team includes engineers and researchers with backgrounds in machine learning, computer systems, and hardware optimization from institutions such as MIT, Stanford, the University of Illinois Urbana-Champaign, and Shanghai Jiao Tong University. Members of the team have also contributed to open-source projects and research initiatives related to GPU performance and kernel generation.
Their experience reflects the increasingly interdisciplinary nature of AI infrastructure development, where advances often require collaboration between experts in hardware, software engineering, and machine learning research.
Standard Kernel’s $20 million seed funding round highlights growing investor interest in technologies designed to improve the performance of AI infrastructure. As artificial intelligence systems become more demanding, the efficiency of the software that controls computing hardware has become a critical factor in overall system performance.
By using AI to automatically generate optimized GPU kernels, Standard Kernel aims to simplify one of the most technically demanding aspects of machine learning deployment. If successful, its platform could help organizations run AI workloads more efficiently while reducing the time and expertise required to tune complex computing environments.
