“The edge market will likely surpass the cloud market in size”

VoicenData Bureau

What makes Edge a much-needed alternative to Cloud and other models? Where does ML computing beat classic computing? Where does analogue computing outshine digital counterparts? Why are there applications in the physical world that cannot entirely rely on a cloud-based experience? And yet, why does the edge market move much slower than the cloud market? Krishna Rangasayee, CEO and Founder of SiMa.ai responds to all these questions in an interaction with Pratima Harigunani. The company has Indian roots and is exploring extensive applications in autonomous vehicles, UAVs, robotics, and Industry 4.0, leveraging Cloud and IoT technologies. Excerpts:

What strides and challenges have been in the Edge computing space, especially embedded Edge?

Artificial Intelligence (AI) has undeniably dominated the computing landscape for the past decade or so, primarily driven by cloud-based solutions. However, the growing need for real-world applications has prompted a notable transition towards a hybrid structure, emphasising the importance of Edge computing. Three key factors fuel this shift.

Firstly, the criticality of throughput and latency cannot be overstated. Physical applications, such as automotive safety systems or robotics with human-machine interfaces, demand instantaneous decision-making, making Edge computing imperative to avoid the latency inherent in cloud-based processes. Secondly, the escalating complexity of AI and Machine Learning (ML) brings privacy and security concerns to the forefront. Lastly, cost considerations play a pivotal role. The expensive nature of cloud propositions, coupled with the impending integration of AI and ML into every device globally, propels a substantial portion of computing onto the Edge.

How potent is it as a future enterprise market?

I would estimate it to be at least 10-20%. The edge market moves at a much slower pace compared to the cloud market, with tens of thousands of customers versus a few giants. Therefore, achieving market adoption is akin to boiling the ocean. However, when considering the scale, this percentage represents a significant portion. In my opinion, one day, the edge market will likely surpass the cloud market in size. This journey lies ahead of us, and the next decade will drive considerable architectural innovation, with people developing purpose-built platforms for the edge. Up until now, the focus has primarily been on the cloud, accompanied by an AI and cloud narrative. However, we are witnessing the emergence of an AI and edge narrative, which I find particularly intriguing.

What’s the potential for Edge servers and Edge processors, especially in the Indian industry?

In the Indian industry, the potential for Edge servers and processors is a game-changer. Think of them as the silent architects reshaping how we operate. With sectors like smart cities and healthcare demanding split-second decisions, the low-latency prowess of Edge computing is a total win. Secondly, privacy is of utmost importance, especially in finance and healthcare. Edge servers bring data processing closer, ensuring compliance and a reassuring sense of security. It’s like having a personal guard for your data. Edge processors also bring a significant reduction in cost.

How easy or complex is embedded edge technology in the automotive space? Is it a competitive advantage or a common denominator now?

In the automotive realm, the shift towards embedded edge technology mirrors the evolution from horse-drawn carriages to the inception of automobiles. This transformation unfolds on two critical fronts: the propulsion shift from combustion to electric powertrains and the infusion of intelligence for advanced driver assistance and safety.

The journey towards Level 3 automation, where vehicles assume control without constant driver attention, necessitates substantial advancements in AI and ML. Traditional methods, marked by long development cycles, have left the industry somewhat stagnant. The long lead times for new Electronic Control Units or ECUs meant that technology aged rapidly within a car’s lifecycle, akin to outdated navigation systems compared to today’s sleek smartphone interfaces. Striving for Level 3 automation, combining AI and ML for advanced functions with deterministic algorithms for safety, emerges as the next frontier.

Is that the reason why the company is so focused on the automotive sector?

SiMa.ai recognises the immense potential of the automotive sector, which represents a substantial portion of the annual semiconductor market of over USD 30 billion and 40% of the embedded edge market. Despite the challenges, we strategically chose to focus on the automotive sector after establishing a solid foundation in ML architecture and software. Our focus is on reshaping its trajectory with purpose-built platforms and value propositions, fueled by our unwavering belief in the strength of our product offering.

What’s the latest intriguing question or update concerning Generative AI that you find fascinating?

I believe the latest noteworthy trend in Generative AI revolves around its accelerated integration into edge use cases. This shift from cloud-centric to edge-centric generative AI signifies a significant transformation. OpenAI’s recent decision to pause ChatGPT Plus sign-ups underscores the crucial role of edge computing, especially for mission-critical applications where real-time performance is essential.

Another fascinating development is the emergence of smaller models, leading to the proliferation of localised generative AI services. This evolution not only revolutionises technical workflows but also holds the promise of positive changes at the municipal level. Cities experimenting with generative AI for real-time transit updates, enhanced recommendations, and traffic management indicate a forthcoming paradigm shift by 2024.

Moreover, buyer priorities are experiencing a seismic shift, with a growing emphasis on software flexibility over brand loyalty. The era of one-size-fits-all chip design is waning, giving way to a demand for software that seamlessly enables AI in products or services at the edge. Industry 4.0 is breathing life into the new factory floor.

Why is adopting a one-size-fits-all approach not advisable for new Edge and ML applications, especially when a CIO considers the importance of IT homogeneity, cost-effectiveness, and maintenance issues?

Over the past 10-15 years, AI has undeniably been the major driving force behind computing trends. AI applications have been the primary driver of computational demand, predominantly residing in the cloud. However, there are applications in the physical world that cannot solely rely on cloud-based solutions.

Three factors hinder the complete adoption of cloud-based experiences, and I have observed these obstacles becoming more pronounced as the industry transitions towards a hybrid structure. While some applications will continue to thrive in the cloud, there’s a growing opportunity and necessity for others to operate at the edge.

What are they?

One is throughput and latency. Not every physical application can afford the latency that comes with the cloud.

Second, privacy and security. I think we have, whether right or wrong, become very comfortable with storing all of our personal information on the cloud, and we assume that it is a great place to do that. But now, partly due to the popularity of ChatGPT, there is a heightened sensitivity to privacy and security, and people are asking, “Can I benefit from AI and ML without having to publicly share my information on the cloud? Can I do localised processing, where the creator of the data can do the compute and the analysis where the data resides, rather than transmitting it around?” This is true in medical, smart vision, retail applications, and more.

The third element is cost. Cloud is not a very cheap proposition; it is quite expensive for many customers. Consider the scale, where now, AI and ML are going to be embedded in every single device on the planet. Currently, at the edge, microprocessors and microcontrollers make up USD 40 billion in annual consumption. That’s a huge number. And soon, 99% of that will transition from classic compute to ML.

What helped you achieve full characterisation and testing for production-grade releases in just five months?

Since founding SiMa.ai, I have experienced the most satisfying and exhilarating moments of my 30+ year career. Achieving full characterisation and testing for production-grade releases in a mere five months is a testament to strategic decision-making, resilient team dynamics, and unwavering commitment to innovation. This journey reinforces our dedication to pioneering innovations at the embedded edge.

How does ML computing compare to classic computing in power consumption, costs, resource usage, and space density?

When comparing ML computing to classic computing, the critical metric is Frames Per Second per Watt (FPS/W), emphasising that a one-size-fits-all approach doesn’t hold in this domain. ML computing, tailored to specific applications, excels in power consumption, cost efficiency, resource utilisation, and space density. Unlike classic computing, ML focuses on specialised hardware, optimising the FPS/W metric to deliver superior performance at reduced power costs. Traditional digital computing faces challenges in handling the diverse workloads of ML applications efficiently. ML-specific System-on-Chips or SoCs are designed to meet the unique demands of artificial intelligence, ensuring optimal FPS/W. This tailored approach results in cost-effective, high-performance solutions.
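The FPS/W metric described above is simply inference throughput divided by power draw. A minimal sketch of the comparison, using entirely hypothetical device numbers (not vendor benchmarks), illustrates why a purpose-built ML SoC can win on efficiency even without the highest raw frame rate:

```python
# Illustrative sketch only: the device names and figures below are
# hypothetical assumptions, not measured benchmarks.

def fps_per_watt(fps: float, watts: float) -> float:
    """Efficiency metric for edge ML hardware: frames per second per watt."""
    return fps / watts

# A general-purpose processor: higher power budget for its throughput.
general_purpose = fps_per_watt(fps=200, watts=25)   # 8.0 FPS/W

# A purpose-built ML SoC: more throughput at a fraction of the power.
ml_soc = fps_per_watt(fps=500, watts=10)            # 50.0 FPS/W

print(f"General-purpose: {general_purpose:.1f} FPS/W")
print(f"ML-specific SoC: {ml_soc:.1f} FPS/W")
```

Under these assumed numbers, the ML-specific part delivers several times the work per watt, which is the trade-off the FPS/W metric is designed to surface for power-constrained edge deployments.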

Can analogue computing find any specific use cases where it is better than digital computing?

Analogue computing, with its continuous and parallel processing capabilities, offers intriguing possibilities. In specific use cases where continuous data representation is advantageous, such as simulating physical systems or solving differential equations, analogue computing can outshine digital counterparts. Analogue’s suitability lies in scenarios where the inherent parallelism and real-time processing capabilities align with the computational requirements, demonstrating a niche advantage over traditional digital computing.

Krishna Rangasayee

CEO & Founder, SiMa.ai

pratimah@cybermedia.co.in
