Interview with Christoph from Helio

15.09.2022

Helio acts as the connector to the customers who buy compute. It makes it possible to maintain the level of service that customers need, even in sustainable data centers.

Christoph talks about what attracted them to ECO-Qube, the technical and business challenges of the project, and breaking up industry silos.

You can watch the whole interview here, or you can read the transcript below.

Mo 0:04
Welcome to the next interview in our interview series on the ECO-Qube project. I'm joined by Christoph Buchli of Helio. Christoph, why don't you introduce yourself and Helio?

Christoph 0:17
Hi, Mo. Thank you for having me. My name is Christoph. I am CTO and Co-Founder of Helio. Helio is a platform that distributes workloads across data centers according to the circumstances of those data centers, such as the source of electricity or the availability of capacity. By that, we hope to reduce the ecological footprint of those data centers and, of course, achieve better prices for our customers by using capacity that would otherwise sit idle.

Mo 0:57
Awesome. So you're optimizing the utilization of existing assets?

Christoph 1:03
Exactly. We are kind of like a virtual data center that spans all the data centers attached to our network. Within that, we optimize the utilization of those data centers, because one of the biggest parts of making data centers more sustainable is actually using the assets.

Mo 1:24
What attracted you to the ECO-Qube project?

Christoph 1:29
Well, the ECO-Qube project is kind of a new approach to doing data centers. If you think about how data centers originally looked: huge buildings where everything is perfectly orchestrated, especially at the hyperscalers. But when you look at how lots and lots of data centers, and lots and lots of servers out there in the world, are actually installed and how they actually look, it's not that picture of huge, perfectly built and managed buildings designed specifically as data centers. A lot of servers actually live in smaller environments within the districts. You know, the industry thinks that this will continue to grow, this trend of servers becoming more distributed. Some call it edge, or whatever you want to call it. But a lot of servers in the future will live in smaller, decentralized, and on-premise data centers. The hyperscalers already do a lot of work here. When you have a huge building with tens or hundreds of megawatts, obviously you are optimizing your cooling, your energy demand, and everything else. But there is a huge untapped opportunity in all of those smaller data centers, where no one is currently taking care of that. And with the software and the solution that we've built in ECO-Qube, this is going to have a huge impact, because we can tackle a much broader field of data centers and actually optimize them, instead of leaving optimization and sustainability only to the big players.

Mo 3:15
Awesome, so you're bringing almost cloud levels of efficiency to edge data centers?

Christoph 3:23
Exactly. And thanks to our platform, we can even move workloads across data centers, not only within a data center. We can achieve cloud-level resiliency and cloud-level uptime for edge data centers without having to build in heavy hardware measures such as diesel generators. We can throw all of that out, but still achieve a cloud-grade level of service.

Mo 3:54
Awesome. What is your role, what is Helio’s role in the ECO-Qube project?

Christoph 4:03
Helio is kind of like the connector to the customers who are buying compute. Because normally, designing and optimizing a data center is done by data center people, obviously, and Helio connects the needs of the data center operators to the needs of the customers. For example, a data center might be consciously built so that it can fail, e.g., when no renewable electricity is available, they shut the data center down. Of course, the users don't want that, and that is ultimately Helio's role in this project: we maintain the service for the customers who buy this compute power, despite a new class of sustainable data centers whose circumstances differ from traditional data centers. Our platform found a way of fulfilling the needs on both sides: the needs of the buyers of compute and the new challenges that more sustainable data centers bring. Helio makes it possible to maintain the level of service that customers need, also in sustainable data centers.

Mo 5:36
There are a lot of technical challenges in the ECO-Qube project, but there are also a lot of business challenges like uptime, availability, resilience, and so on.

Christoph 5:48
Exactly. And we are really the stakeholder who brings that to the table within the ECO-Qube project. We have amazing partners who optimize down to the tiniest detail; they squeeze out everything that's possible on the data center optimization side. And we try to connect that with the business side, because these newly, highly optimized data centers have some different requirements, and there are also some different offerings towards the customer. We bring the business perspective to that. And of course, our software can do the same optimization that we do across data centers inside a single data center as well. So, we also do the whole workload scheduling within the ECO-Qube project, meaning within one data center. Our platform can handle that too; it is built to fulfill the customer needs both within a data center and spanning multiple data centers.

Mo 6:56
So hopefully, by the end of the project then, the ECO-Qube solution would be just as resilient as any other data center solution.

Christoph 7:06
Yes, exactly, that's the goal. And we are working on that. Everything we learn from ECO-Qube we also want to bring to market, and our technology will be applied to existing data centers. We already do that, and thanks to ECO-Qube, we can massively increase our efficiency in doing so.

Mo 7:36
Awesome. We're one year into the project. What has been the main progress overall? What has been achieved so far, and what do we still need to achieve in the next two years?

Christoph 7:50
For us, it has mainly been about building the foundation for workload management. That essentially means gathering all the information from all the different components, like the servers and the temperature of the building, into a data lake that we can build and optimize our software against. That has been the main challenge. So we have been profiling workloads with regard to their CPU utilization, collecting data from the meters inside the data center, and defining how this data needs to be structured so that we can then use it to make our optimizations. It's been a year with a lot of groundwork. And we are very much looking forward to the next months, because that is when we will tie those loose ends together into the decision-making software that actually does the smart workload scheduling. We feel we've done pretty solid work on the data side, so that once we have built the distribution algorithm, we can apply machine learning based on the structured data we set up and are currently collecting.

Mo 9:23
Okay, so that kind of answers the second part of my question, which is what comes next. So, in the next months you'll be linking it all up; previously you've been structuring all the data.

Christoph 9:38
Exactly, all the partners have been working very hard on providing all of the information and also the interfaces to the building and the energy system, which we need to start doing the optimization. Now, we're tying all of that together. That's the next six months in the project.

Mo 9:58
And what are the biggest challenges that you've come across in this project?

Christoph 10:03
Well, for us, the biggest challenge has always been understanding the hardware. Because traditionally, the principle of the cloud is to forget about hardware. So for us as a company, all of the tools and infrastructure software that we use are hardware agnostic, because that's what everyone has been working towards: making the hardware transparent. It just works, whether the disk is attached locally or sits somewhere else in the data center. What we now have to do in the ECO-Qube context is bring the needs of the hardware, like the temperature of the CPUs, the cooling, and all of that, back into the conscious mind of the cloud infrastructure that we're building. And it's been pretty challenging, because all of the ECO-Qube partners are data center people, and data center people normally work up to the operating-system level at best, usually at the hypervisor level. And then that's it; there is a very clear cut. Above that are the, I'll just call them cloud people, the software people, and those two groups have, over the past 20 years, worked to minimize their interactions. It's been a real challenge to figure out how we are going to use these tools on our platform, tools that are amazing for moving workloads around within and across data centers and making data transparently available, because the software is completely ignorant of the underlying hardware layer, and we have to merge the two together. That is the biggest challenge. Also the communication: even within the ECO-Qube project, where we are a pretty tight bunch, we notice that this is the biggest challenge for us.

Mo 12:07
Interesting. So, breaking open the silos that the industry has built up over the last 20 years of operation.

Christoph 12:16
Yeah, exactly. And the direction this is heading is still toward a more highly specialized environment with even more specialized people. But the interfaces between those layers where the compute lives have to become more talkative, because right now it's just "here is the username and password to your server," and that's it. With this increasing level of specialization, we need to better define the interfaces and talk to each other about what we need. So yes, it's been unsiloing, but not in the sense of one person doing everything again, because I clearly don't want that to happen. In the ECO-Qube project there is of course a risk of that idea, that because these are small data centers, one company could handle everything from the very bottom up to the customer application. And I think that's not the way to go. Unsiloing is more about creating APIs and interfaces that properly talk to each other. And for that, the people have to properly talk to each other.

Mo 13:38
So that relationship becomes more real time.

Christoph 13:43
Yes, exactly.

Mo 13:46
Right, okay. What could be the revolutionary effect of ECO-Qube?

Christoph 13:55
The revolutionary effect, I think, is that we can really showcase a scenario where we achieve incredible levels of service for the customer with a highly efficient data center, purely by means of software. So, we are not going to put a redundant internet connection or a redundant power supply into these data centers, but we're still going to achieve a level of service as if we had those hardware components. I think everyone in the industry knows that this would be very cool, but no one thinks it is possible. Showing that it actually is possible, and achieving an unprecedented level of efficiency in those data centers, will have a groundbreaking effect, because once you show that higher efficiency is possible, everyone will want to achieve it. I think SMEs and even bigger data centers will benefit enormously once we have proven that it works. My hope is that an awakening will go through the industry about how we can build highly sustainable data centers and solve all of the challenges that sustainable data centers have at the software level.

Mo 15:48
So, it's revolutionary in what the customer receives and revolutionary in how it's provisioned. It's a real step change. 

Christoph 16:00
Yeah, exactly. Especially when you compare it to how data centers work today, because as I said, they are highly isolated, siloed buildings. Most data center operators operate out of two physical locations at most. It will take a while until we can move the whole industry from hardware to software. But we definitely believe, and we want to prove in the ECO-Qube context, that by moving everything into software we can achieve a level of service for the customers that they currently only get through hardware measures. This will be a huge game-changer, definitely.

Mo 16:55
Last question. Is there anything else that you'd like to share about the ECO-Qube project?

Christoph 17:03
That's a difficult one. It's been great fun to see how different stakeholders, different people coming from different angles, think about data centers and about sustainable data centers. I really have the feeling that we are very close to a big bang in the industry, where someone just needs to prove it. I'm not sure if we are the right group to do it, but I definitely have the feeling that we're on the right track, and I definitely think that if anyone can prove it, then we can. So it's really great to be part of this project and to have a real chance of being the ones to prove this. For me, it's great to be part of the cause of this big bang.

Mo 18:08
Awesome. Thank you, Christoph. Thanks for your input, that was Christoph Buchli of Helio, work package four leader in the ECO-Qube project. Thanks for joining us today.

Christoph 18:18
Thank you very much Mo.
