This repository was archived by the owner on Jul 29, 2025. It is now read-only.
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Kernelize</title>
<link>https://kernelize.ai/</link>
<description>Recent content on Kernelize</description>
<generator>Hugo</generator>
<language>en-us</language>
<lastBuildDate>Thu, 01 May 2025 09:00:00 -0700</lastBuildDate>
<atom:link href="https://kernelize.ai/index.xml" rel="self" type="application/rss+xml" />
<item>
<title>Products</title>
<link>https://kernelize.ai/products/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://kernelize.ai/products/</guid>
<description><p>Kernelize uses and actively supports the open-source Triton compiler and language. Triton is widely used to describe optimized GPU kernels, and we leverage it to quickly target and optimize for new AI accelerator hardware.</p>
<p>Triton already supports autotune to search for supported and optimal kernels, so the main features needed to target new hardware are a modular backend and a discovery-based runtime. Most AI frameworks and ML graph compilers already target Triton by default.</p></description>
</item>
<item>
<title>Open Core Ventures announces Kernelize</title>
<link>https://kernelize.ai/posts/welcome/</link>
<pubDate>Thu, 01 May 2025 09:00:00 -0700</pubDate>
<guid>https://kernelize.ai/posts/welcome/</guid>
<description><p>Open Core Ventures (OCV) has just unveiled <strong>Kernelize Inc.</strong>, an innovative AI compiler platform designed to “bridge the CUDA moat” by auto-generating optimized backends for a wide variety of hardware targets. Built on the open-source Triton compiler, Kernelize lets developers write high-performance GPU kernels in Python once and deploy them across GPUs, NPUs, TPUs, and more—eliminating lock-in to any single vendor’s proprietary stack. Founded by industry veteran Simon Waters, whose résumé includes leading AMD’s Triton contributions and co-creating the Catapult C Synthesis tool, Kernelize aims to democratize AI performance and accelerate hardware-agnostic innovation.</p></description>
</item>
<item>
<title>About</title>
<link>https://kernelize.ai/about/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://kernelize.ai/about/</guid>
<description><p>Our team includes experts with extensive experience in Triton and building systems for AI inference.</p></description>
</item>
<item>
<title>Contact</title>
<link>https://kernelize.ai/contact/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://kernelize.ai/contact/</guid>
<description><h1 id="contact-us">Contact us</h1>
<p>Please <a href="mailto:simon@kernelize.ai">email Simon</a> if you have any questions about Kernelize.</p></description>
</item>
<item>
<title>Jobs</title>
<link>https://kernelize.ai/jobs/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://kernelize.ai/jobs/</guid>
<description><p>Welcome to our Jobs page!</p></description>
</item>
<item>
<title>Pricing</title>
<link>https://kernelize.ai/pricing/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://kernelize.ai/pricing/</guid>
<description><p>Looking for access to better AI Accelerator Hardware?</p>
<p>Kernelize’s products will be released as soon as supported AI accelerator hardware is released. We will update this page with more information after each hardware release.</p>
<h2 id="ai-inference-accelerator-hardware-providers">AI Inference Accelerator Hardware Providers</h2>
<p>Our goal at Kernelize is to seamlessly move GPU workloads to your AI inference hardware. We provide an open-source compiler and consistent inference solutions for AI accelerator hardware. Please contact <a href="mailto:sales@kernelize.com">sales@kernelize.com</a> if you would like to know more about Kernelize supporting your hardware.</p></description>
</item>
</channel>
</rss>