Suna Lumeh

Director, Platform & Ecosystem
Portal Innovations


As frontier LLMs grow ever larger, the challenge is no longer just raw compute—it’s deploying them efficiently, reliably, and at scale. In this session, Rebellions presents its full-stack approach to AI infrastructure, from advanced NPUs and chiplet interconnects to an optimized software stack and production-ready deployment frameworks. Attendees will see a demo of MoE inference and explore real customer use cases spanning security, vision, and enterprise AI applications. Discover how Rebellions’ approach bridges innovation and practicality—delivering enterprise-grade efficiency, scalability, and lower TCO for real-world data centers.
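For context on the workload featured in the demo, the sketch below illustrates the core routing step of a Mixture-of-Experts (MoE) layer: a gating network scores the experts for each token and only the top-k experts are executed. This is a minimal, generic NumPy illustration of the technique in general; the function names, shapes, and top-k choice are assumptions and do not represent Rebellions' hardware or software stack.

```python
# Minimal Mixture-of-Experts (MoE) routing sketch in NumPy.
# Illustrative only -- not Rebellions' stack; names, shapes, and top_k are assumptions.
import numpy as np

def moe_layer(tokens, gate_w, experts, top_k=2):
    """Route each token to its top-k experts and mix their outputs.

    tokens:  (n_tokens, d_model) activations entering the MoE layer
    gate_w:  (d_model, n_experts) gating weights
    experts: list of callables, each mapping (d_model,) -> (d_model,)
    """
    logits = tokens @ gate_w                           # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)         # softmax over experts

    out = np.zeros_like(tokens)
    for i, (tok, p) in enumerate(zip(tokens, probs)):
        chosen = np.argsort(p)[-top_k:]                # indices of the top-k experts
        weights = p[chosen] / p[chosen].sum()          # renormalized gate weights
        for e, w in zip(chosen, weights):
            out[i] += w * experts[e](tok)              # weighted mix of expert outputs
    return out

# Tiny usage example: 4 tokens, model width 8, 4 "experts" (random linear maps).
rng = np.random.default_rng(0)
d, n_exp = 8, 4
experts = [lambda x, W=rng.standard_normal((d, d)) * 0.1: x @ W for _ in range(n_exp)]
y = moe_layer(rng.standard_normal((4, d)), rng.standard_normal((d, n_exp)), experts)
print(y.shape)  # (4, 8)
```

Because only top-k experts run per token, total parameters can grow while per-token compute stays roughly constant, which is what makes MoE inference an attractive target for specialized accelerators.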

Author:

Jinwook Oh

Co-Founder and CTO
Rebellions

Jinwook Oh is the Co-Founder and Chief Technology Officer of Rebellions, an AI chip company based in South Korea. After earning his Ph.D. from KAIST (Korea Advanced Institute of Science and Technology), he joined the IBM TJ Watson Research Center, where he contributed to several AI chip R&D projects as a Chip Architect, Logic Designer, and Logic Power Lead. At Rebellions, he has overseen the development and launch of two AI chips, with a third, REBEL, in progress. Jinwook's technical leadership has been crucial in establishing Rebellions as a notable player in AI technology within just three and a half years.


As AI-native cloud platforms scale to meet global demand, the data center IT infrastructure powering generative AI has become a prime target for attackers. In this session, Dr. Yuriy Bulygin—CEO of Eclypsium and former Chief Threat Researcher at Intel—shares a case study from one of the world’s fastest-growing AI cloud providers, which supports OpenAI, Microsoft, and NVIDIA workloads, and explains how it delivers the secure AI infrastructure its customers demand.

This provider faced the challenge of protecting thousands of specialized servers, GPUs, and system components—without slowing growth. Learn how they implemented a turn-key approach to data center security, leveraging firmware integrity verification, automated vulnerability management, and continuous supply chain monitoring.
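As background on one of the techniques mentioned above, firmware integrity verification in its simplest form means hashing a firmware image read from a device and comparing it against a known-good baseline. The sketch below is a generic illustration of that idea only; it is not Eclypsium's product or API, and the baseline format and file paths are assumptions.

```python
# Minimal sketch of firmware integrity verification by hash comparison.
# Generic illustration only -- not Eclypsium's product or API.
# The JSON baseline format and paths below are assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a firmware image on disk."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_firmware(image_path: Path, baseline_path: Path) -> bool:
    """Compare an extracted firmware image against a known-good baseline.

    The baseline is assumed to be a JSON map of component name -> expected SHA-256.
    """
    baseline = json.loads(baseline_path.read_text())
    expected = baseline.get(image_path.name)
    if expected is None:
        print(f"[WARN] no baseline entry for {image_path.name}")
        return False
    if sha256_of(image_path) != expected:
        print(f"[ALERT] {image_path.name}: hash mismatch (possible implant or drift)")
        return False
    return True

# Hypothetical usage: verify a BMC firmware dump against a golden-hash baseline.
# verify_firmware(Path("dumps/bmc_fw.bin"), Path("baselines/golden_hashes.json"))
```

In practice this check is run continuously across the fleet and paired with vulnerability and supply chain data, which is what turns a one-off audit into the kind of turn-key monitoring described in this session.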

Attendees will gain insight into:

  • Why infrastructure threats in AI data centers are rising—and often invisible

  • What a secure AI cloud looks like and how to minimize risk

  • Lessons for building resilient, secure AI data centers without adding operational drag

If you're responsible for AI infrastructure, securing the data center isn’t just an IT concern—it’s foundational to model integrity and platform trust.


Author:

Yuriy Bulygin

Co-Founder & CEO
Eclypsium

Yuriy Bulygin is Co-Founder & CEO at Eclypsium. Prior to founding Eclypsium, Yuriy led the Advanced Threat Research team at Intel Security and the microprocessor security analysis team at Intel Corporation. He also created CHIPSEC, the open-source firmware and hardware security assessment framework.


California faces over 7,000 wildfires each year, with enormous costs to lives, communities, and ecosystems. Responding faster requires distributed sensing and intelligence that can act in the field, where traditional satellites and watchtowers fall short. Wywa.ai First Responder is an open-science initiative led in collaboration with researchers from MIT and CMU, together with industry leaders and policy experts, to design and deploy a scalable wildfire early-warning network. The system combines ultra-low-cost LoRa-enabled chemical sensors with edge AI and vision-language models. These distributed “artificial noses” continuously monitor the air for smoke and combustion signatures. When risk thresholds are detected, the sensors activate nearby edge vision systems that confirm wildfire presence and generate real-time alerts for first responders and civic authorities. We will present results from early deployments, highlight the LoRa network architecture and AI model training that make such systems deployable at scale, and discuss how open collaboration across academia, industry, and government can accelerate resilience. The session will include a live demonstration of how edge intelligence can empower communities to act in the earliest, most critical moments of wildfire response.
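To make the sensor-to-vision handoff concrete, the sketch below shows one plausible shape for the trigger logic described above: a chemical-sensor reading is checked against a risk threshold, and only then is a nearby edge camera asked to confirm before an alert is raised. This is an illustrative outline under assumed names, fields, and thresholds, not the Wywa.ai First Responder codebase.

```python
# Illustrative sketch of the sensor -> vision -> alert trigger flow described above.
# Not the Wywa.ai First Responder implementation; names, fields, and thresholds are assumptions.
from dataclasses import dataclass

SMOKE_INDEX_THRESHOLD = 0.7  # assumed normalized risk threshold

@dataclass
class SensorReading:
    node_id: str
    smoke_index: float   # normalized combustion-signature score from the chemical sensor
    lat: float
    lon: float

def confirm_with_vision(reading: SensorReading) -> bool:
    """Stand-in for waking a nearby edge camera and running a vision/VLM smoke check."""
    # A real deployment would request frames from the closest camera node and run a
    # smoke/fire detector; here we simply substitute a placeholder decision.
    return reading.smoke_index > 0.9

def handle_reading(reading: SensorReading) -> None:
    """Gate the expensive vision check behind the cheap chemical-sensor threshold."""
    if reading.smoke_index < SMOKE_INDEX_THRESHOLD:
        return  # below risk threshold: keep the camera asleep to save power
    if confirm_with_vision(reading):
        print(f"ALERT node={reading.node_id} at ({reading.lat}, {reading.lon}): wildfire confirmed")
    else:
        print(f"WATCH node={reading.node_id}: elevated smoke index, no visual confirmation yet")

# Example: a single reading arriving over the LoRa uplink (values are made up).
handle_reading(SensorReading(node_id="wywa-042", smoke_index=0.82, lat=38.58, lon=-121.49))
```

Gating the camera and model inference behind a cheap chemical threshold is what keeps the per-node power and bandwidth budget low enough for wide LoRa deployments.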

Autonomy
On-Device ML
Robotics
Industrial Edge

Author:

Anirudh Sharma

Researcher
Amazon Lab126

Anirudh Sharma is a researcher and inventor whose work spans human factors, speech and vision interfaces, and system design. With a research background at the MIT Media Lab and now at Amazon Lab126, he builds novel computing interfaces that merge advanced sensing with real-world applications. His first venture developed and shipped gait-sensing haptic insoles that help elderly and visually impaired people navigate through tactile feedback, now used worldwide. He later co-founded Graviky Labs, which turns air pollution into usable materials. His contributions have earned recognition from MIT Technology Review (TR35), Forbes 30 Under 30, TIME 100, and TED Global.


Author:

Navya Veeturi

Program Founder
Wywa.ai First Responder

Navya Veeturi is the founder of Wywa.ai First Responder, an open initiative focused on protecting communities and forests from wildfires through the power of low-cost sensors, edge AI, and generative intelligence. With a background in leading AI and data engineering teams at NVIDIA, Navya combines technical expertise, product vision, and community impact to build scalable, AI-driven solutions that empower first responders, local leaders, and citizens.


Data Privacy & Governance
Enterprise Use Case

Author:

Anusha Nerella

Senior Principal Software Engineer
State Street

Anusha Nerella is an award-winning AI and fintech innovator known for her original contributions in transforming institutional trading and digital finance. She has pioneered AI-driven trading strategies, real-time big data systems, and automation frameworks that have redefined how financial institutions operate. Anusha’s innovations—from modernizing Barclaycard’s digital payments infrastructure during COVID-19 to architecting intelligent trading models—have driven measurable impact, earning her recognition as a thought leader shaping the future of AI-powered finance.


What does it take to run one of the world's largest AI supercomputers? As artificial intelligence workloads grow exponentially, operating a hyperscale AI cloud fleet demands new strategies for resilience, efficiency, and operational excellence. This session explores Microsoft’s approach to scaling infrastructure for 100X growth, focusing on the intersection of system innovation and advanced fleet management.

Storage

Author:

Dharmesh Patel

Partner, Manufacturing Quality Engineering
Microsoft

Dharmesh Patel serves as the General Manager and head of the Quality Engineering Organization at Microsoft. In this capacity, he oversees the AI Fleet Quality team to ensure AI capacity, stability, and reliability throughout the hardware supply chain from manufacturing to data centers. His responsibilities include enabling Microsoft to scale AI capacity while maintaining high hardware quality standards across all stages of product development from concept through mass production. With nearly twenty years of experience in managing complex products and promoting process excellence within data centers, Dharmesh is a recognized leader in his field.


Author:

Prabhat Ram

Partner, Software Architect
Microsoft

Prabhat leads the AI Customer Experience team within Microsoft Azure. He is responsible for operating AI Training supercomputers for OpenAI and other strategic customers. He holds a master’s in Computer Science from Brown University and a PhD from the Earth and Planetary Sciences department at U.C. Berkeley.

He has coauthored more than 150 papers across computer science and the domain sciences, and his work has been recognized throughout the industry, including the 2018 ACM Gordon Bell Prize for his team’s work on Exascale Deep Learning.
