Episode 19: Today I am speaking with Koen, an Indexer at The Graph (Mind Heart Soul). Koen joined The Graph during testnet and Mission Control, is a grant recipient with The Graph Foundation, and operates a standalone Indexer.
Our conversation covers a range of topics, including Koen’s entry into crypto, why he decided to become an Indexer at The Graph, what the early days were like for the Indexer community, and how he thinks about Curation and query fees.
The GRTiQ Podcast owns the copyright in and to all content, including transcripts and images, of the GRTiQ Podcast, with all rights reserved, as well as our right of publicity. You are free to share and/or reference the information contained herein, including show transcripts (500-word maximum), in any media articles, personal websites, other non-commercial articles or blog posts, or on a non-commercial personal social media account, so long as you include proper attribution (i.e., "The GRTiQ Podcast") and link back to the appropriate URL (i.e., GRTiQ.com/podcast[episode]). We do not authorize anyone to copy any portion of the podcast content or to use the GRTiQ or GRTiQ Podcast name, image, or likeness for any commercial purpose or use, including without limitation inclusion in any books, e-books or audiobooks, book summaries or synopses, or on any commercial websites or social media sites that either offer or promote your products or services, or anyone else's products or services. The content of GRTiQ Podcasts is for informational purposes only and does not constitute tax, legal, or investment advice.
We use software and some light editing to transcribe podcast episodes. Any errors, typos, or other mistakes in the show transcripts are the responsibility of GRTiQ Podcast and not our guest(s). We review and update show notes regularly, and we appreciate suggested edits (email: iQ at GRTiQ dot COM).
The following podcast is for informational purposes only. The contents of this podcast do not constitute tax, legal, or investment advice. Take responsibility for your own decisions, consult with the proper professionals, and do your own research.
00:21
They have been working on The Graph for like two or three years already, the Edge & Node team. But in many ways it's still early days; the potential for The Graph is yet to come. The hosted service should be a clear indication, with 7,000 subgraphs or more, and I think 29 billion queries in May. So that gives some indication of where The Graph is going to go.
01:14
Welcome to the GRTiQ Podcast. Today I'm speaking with Koen, an Indexer at The Graph. Koen is an exceptionally talented individual who took part in the early days of The Graph with the testnet and Mission Control. In addition to his Indexer operation, Mind Heart Soul (mind-heart-soul.eth), Koen also operates a standalone Indexer and helps other Indexers at The Graph. My conversation with Koen covers his entry into crypto, what is meant by running a node and what an archive node is, and his perspective on curation and query fees. We started the conversation by talking about Koen's home of Belgium, and how the people there feel about crypto.
01:57
That’s, I don’t know the general feel, I think, I would say crypto is, is there, but not too much. If I for example, compared to the Netherlands, the country that’s north of us and we speak the same language. I have the feeling that in the Netherlands there are more out there with crypto more interested than we are in the in the Dutch speaking part of Belgium, the if I look at my family, they ‘Oh crypto. What’s this?!. So scary thing. You know, it’s gonna collapse all the you know, all the clichés that you hear of the non-believers. But yeah, time will tell.
02:35
What can you tell us about your professional and educational background?
02:39
I have a bachelor's in informatics and electronics. And after that, I felt there were still some things missing, so I started to go to evening school. Actually, the first thing I studied in evening school was Spanish. If you had told me in high school that I would voluntarily go study a language, I would have told you you're crazy, because I'm very much a scientific, mathematical person. But yeah, I very much enjoyed studying Spanish for about four years. And then I studied some more web dev technologies, you know, the typical web languages: PHP, HTML, CSS. So that's where I come from educationally. Professionally, the story is a bit more complicated, because I'm limited by my health concerns, but I'm also not a person who can do nothing. So I started figuring out things that I can do within my own limitations. I've done a bunch of things, and most of them split into two categories. On one hand, I do a lot of technical work: I've done website building, built computers from scratch, set up network environments, all kinds of easy or fancy stuff. The other category is more in the area of events and trainings, either organizing them or hosting them. Workshops, and most of them aren't technical at all; my more recent work has been around well-being and group dynamics, because that really fascinates me. So I've been doing a lot of work on that. Well, not recently, because of all the COVID measures. But yeah, it's interesting to have two completely different kinds of work. Computer work is very much work on your own: you do your thing and you work with a computer. The other work is very people-oriented, very social in a way, even though I'm quite an introvert myself. But when I'm hosting a workshop, I always enjoy what I'm doing. I've done that pretty much across Europe and beyond. It's always a nice excuse to travel somewhere to go help people with workshops and trainings.
04:56
When did you first get involved in crypto?
05:00
Compared to some others I was fairly late, because I joined somewhere in early 2017. I think I was already hosting VPS servers and running websites and Nextcloud and stuff like that. Well, at that point it was still ownCloud. So my logical step was to try a simple masternode. I mean, the technology is quite similar between them, but simple in the sense that you buy the collateral for $10. So if you mess up, then, okay, it's just $10 plus whatever you paid for renting the server. And I got fascinated by this. I mean, I was already fascinated by the idea of crypto, but yeah, I got quite interested in all this masternode stuff. And then from the cheaper masternodes, I went to more, I don't know how to say it, reliable ones, more stable in the long term. And from there, I went to validators. At some point, I was even advising a masternode hosting platform on the technology side, and I helped build some stuff. Yeah, so I started with masternodes, got into validators, and I ended up at The Graph as an Indexer.
06:14
So how would you describe what the pull of The Graph was for you personally, to get involved in the community so early as an Indexer?
06:23
Yeah, I mean, throughout Mission Control, I saw what was there. And in a sense, it almost felt like a natural evolution to go from the Mission Control indexing to mainnet indexing. And for me, being an Indexer, yes, I'm very technical in mind. Don't ask me to do a technical analysis of a coin, I can't, but ask me to manage servers? Yes, please. So it's a role that fits me, and that's why I keep doing it. And that's why, yeah, in the very early days of mainnet, I spent a bunch of time on improving my Ethereum archive node setups. I made a second node, and even a third node, and I then started renting out access to other Indexers, because it's one of the most difficult components, and while the stability has improved a lot, it's also a very expensive component. So basically, I rent out access to some of the other Indexers, which helps lower the cost of running more than one node. Because if you just have the one, you have a single point of failure, and depending on what goes wrong, it can take a lot of time to get it back up. So basically, a few of us now have the benefit of being backed by more than one node. Yeah, it's a technical part that really suits me quite well, and that I'm interested in. You know, I'm always looking for more things to learn, more things to improve.
07:53
The name of your Indexer operation is unique. It's Mind Heart Soul. I'm curious why you chose it.
08:00
Mind Heart Soul. Yeah, at some point, I was looking at establishing my own website, and I was looking, okay, what are options that I feel match with me? And Mind Heart Soul reflects a bit my own choice of work, in the sense that you combine the different aspects of the human being. Because, you know, if you're all thought and no emotion, no soul, then you're missing out on things. If you're all emotions, then yeah, maybe you're not making the smartest decisions either. So I chose this name, in a sense, to represent a complete human being, or a more or less complete human being, you know, all the complementary aspects.
08:43
So tell us when you first became aware of The Graph and what you thought.
08:47
Yeah, last summer, it must have been June or July that it popped up on my radar. I had been doing testnets before, and I found, or somebody told me, look, there is this company, they're going to do a testnet. And then I started looking into, okay, what is The Graph? What is the testnet? At that point it seemed interesting. Okay, interesting idea, interesting testnet, it's probably going to be big, because the rewards are also quite big. So I signed up, I got invited to the Indexer testnet, Mission Control, and that was the start of my Graph journey. I was there from the first days of Mission Control, and had joined The Graph, I don't know, one or two weeks before.
09:31
I’d like to know more about those early days with Mission Control and testnet. What can you tell us about that experience? Was there a sense of community early on? Was there anticipation, lots of questions? What was that experience like for you?
09:45
The two weeks before it started, we were all in Discord, you know, the 200 of us, or 250 of us, excited: when are we going to start? And then finally they announced the opening session, which was going to be held online. We were all looking forward to getting in. And then, I'm going to guess it was Yaniv who presented; it can still be found online, it's on YouTube. But yeah, it was very interesting to have that first introduction. And then, in order to get started, we needed to wait for another workshop. So yeah, there was a lot of anticipation of getting started and getting our hands on the technology. But yeah, it was quite challenging. Because while Graph Node was there and existing, because they were using it on the hosted service, it was set up in a very specific way: all the information available was specific to running a Kubernetes cluster on Google Cloud Platform. And I felt, okay, that's really not my thing, I really want to do it another way. So I started creating installation scripts so that we could run The Graph on a regular Ubuntu server, no Kubernetes, no Google obliged; any provider, or any server that runs Ubuntu, would work. And I immediately decided, okay, I'm just going to publish this to a public GitHub repo so that others can use it as they see fit. So yeah, you get the information, but it was in a format that I felt was not how I would like to work with it. So I needed to first understand the information that was there, and then convert it into a way that I would like to work with it. And those scripts have been used by a lot of people, either as they are, or as a foundation for creating their own stuff. And then we started with learning the basics of the Graph Node, the Indexer, the query node; that was still quite okay, I thought. At that point, our biggest challenge was getting an Ethereum archive node synchronized, which was at the time five terabytes of data, more or less, so it would take weeks to sync, and we were trying to find solutions. And then I think I was one of the people who found TurboGeth, and I synced it and made the data file available, which was less than a terabyte, it was like 800 gigabytes. Whoa, big win, still a big win. And I made it available to other Indexers on the network, so that we could get started. So yeah, there was a lot of figuring things out. But there was also quite a good sense of community in these early days already, because it felt a bit like, okay, we are in this together, to figure out what the hell we were supposed to do. But yeah, after that phase zero, we went into, you know, what is now known as the agent and the service, which were still in full-on development, because they didn't need those for the hosted service, but they do need them for the Indexers on mainnet. And then we were pretty much in the wild, because documentation was sparse or non-existent, and we were figuring out things as we were going, basically. Yeah, it was an interesting time. Lots of learning, lots of trying, lots of doing, lots of… yeah.
13:12
So when you say running a node, and it's common, right, for Indexers to say they're running a node, what is meant by that? What is a node? And what do you mean when you say you run a node?
13:23
Okay, The Graph stack is quite elaborate. So basically, you start with your Ethereum archive node, because that's where the data is that you're interested in. And then you connect, basically, an Indexer to it that will extract the data from the Ethereum blockchain. The Indexer itself connects to a Postgres database; that's where the data gets stored. And from the Postgres database, it gets accessed by a query node, which is another piece of software, where people who want to retrieve information that has been indexed can access it. And then around it, we have the service, and the service actually sits in between the query node and the user. The main task of the service is to handle the payment for the queries. Eventually, this entire operation costs quite some money in servers and in time, and obviously, yeah, we'd like to get paid for that. So something needs to handle the payment, and that is what the service is all about. And then there is the agent, and the agent controls the lifecycle of the subgraphs and the money being allocated to subgraphs. So there are a lot of pieces of the puzzle that need to fit together. And it will probably get even more elaborate as we add more. Right now on mainnet we have only Ethereum subgraphs, but in the future there will be, I don't know, maybe Rinkeby or xDai or a whole slew of other networks that The Graph is already supporting. So then you would need to add an archive node for those networks as well; it will expand.
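For listeners who want to see the user-facing end of that stack, here is a minimal sketch of a dapp sending a GraphQL query to a query endpoint. The endpoint URL and the field names are hypothetical placeholders: which fields exist depends entirely on the subgraph's schema, and on the decentralized network the request passes through the service (which handles payment) before the query node answers it from the Postgres store.

```typescript
// Minimal sketch of querying an indexed subgraph over GraphQL (Node 18+ / browser).
// The endpoint and the fields below are placeholders, not a real subgraph.
const endpoint = "https://example-gateway.example/subgraphs/id/<SUBGRAPH_ID>";

async function querySubgraph(): Promise<void> {
  const query = `
    {
      pairs(first: 5) {
        id
        token0 { symbol }
        token1 { symbol }
      }
    }
  `;
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data, errors } = await res.json();
  console.log(data ?? errors);
}

querySubgraph().catch(console.error);
```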
15:03
So tell me more about this archive node. It comes up all the time, and I've just never really asked, but what is it? I mean, are there pieces of the archive that live on the computers of Indexers all across the world? Or does each Indexer have its own archive node? How should I think about that?
15:19
There are different options. A bunch of Indexers are running their own archive node or nodes. But there are also, I'm pretty sure, Indexers who rent access to an archive node from commercial providers; there are some commercial providers who focus on delivering access to archive nodes like that, or to full nodes. And then there are various groups of Indexers who sort of work together, either by collectively managing nodes, or, like I'm doing, by renting out access for a fixed fee. So it's still managed on your own, but not individually; it's shared with a few people. For the people who share it, it's a combination of cost and having more resources, in the sense that, okay, it's great to have your single archive node, but when that fails, your entire indexing operation goes down. So if you have a second one, yeah, that's obviously better, but then you have to pay for a second server, or a second service that provides it to you, which doubles your costs. So if you can share that with three or five or whatever Indexers, you have more technical capabilities for a better price.
17:56
I want to learn more about this standalone Indexer operation. What can you tell us about that? It sounds like something unique and something that makes you different.
18:03
Yeah, at the beginning of the Indexer mainnet, I started looking quite carefully at my own operations; I tried to eliminate the single point of failure. So I started with running more than one archive node, and trying to get that as stable as possible, because that has been one of the worst problems, one that got a lot of Indexers struggling. So I focused for about two months on the Ethereum archive node, getting it as smooth and reliable as possible, and replicating it. And after that, I had the opportunity to start working with a project outside of The Graph that wanted to use The Graph technology on a different blockchain, one that was at that point not yet supported by the hosted service. But I researched it and figured out it was EVM compatible, so yeah, it should work, and I was happy to try it, because it would be an interesting challenge. And now we are basically three months further down the line, and it's been a really interesting experience to work on that standalone Indexer as well, besides my mainnet Indexer. It got me experience with indexing on a different blockchain, a blockchain with a very low block time, which brought to light a bunch of new challenges. I handled a lot of production traffic, a lot more traffic than I ever did on Mission Control or on the current mainnet, which brought to light some more bottlenecks and choke points and things like that. So yeah, I've constantly been working to get ahead of the curve, to get my own mainnet operations as stable and as reliable as possible, and beyond that, to be really on the edge and at the front of understanding The Graph technology and where it's heading. Working with the other project got me a lot more excited about The Graph, because it gave me even more insight into the potential that it could bring. The Graph is really poised to be an omnipresent, ubiquitous solution that is going to be used throughout the entire DeFi space. So yeah, that's what's keeping me busy: to keep improving, to keep learning, to keep challenging myself. Like I said, the very low block time is a challenge in itself, because the available time that you have to do everything, first what your archive node needs to do, and then what your Indexer needs to process, becomes a lot shorter. And the things learned from that I can carry over to other chains that are going to be there. And for those who stop by Discord from time to time, they will see that for everything that is standalone indexing on another network, people tend to say, yeah, you should ask Koen. So yeah, I've become a bit the de facto resource for that sort of question. And I think that's quite telling of where my skills have gotten to, working for three months on, well, I call it a side project, which is a bit denigrating to the project, because the time that I'm not spending on my mainnet Indexer, I spend on that project. And they've grown quite significantly since I joined them. So yeah, that's what I try to achieve: to always keep refreshing the knowledge, keep refreshing the expertise.
21:29
So Koen, how would you describe this for someone like myself who's non-technical? Obviously there are enough barriers here that I couldn't wake up tomorrow and be an Indexer, but I do want to understand the challenges that Indexers face. So how would you describe what some of the challenges of being an Indexer are?
21:46
Yeah, there are a lot of things that come into play, both on the technical part and on the tokenomics, so to say. There is a lot of infrastructure to manage; you need fairly potent servers to run that. And the expectation is that when query volume goes up, that will need to scale quite significantly. But it's also understanding the dynamics. Some time ago, there was a bit of a discussion going on in the Discord server about a quite large Indexer who updated their allocations, and the way they had done it impacted the revenue of all other Indexers and their Delegators. So it's important to understand that your own actions have consequences, not just for yourself, but also potentially for everyone else in the network. And like I said, it's the revenue, so it's not just the Indexers that you're affecting, it's also all of their Delegators. So yeah, that's when the discussion happened, and then people stepped up like, okay, we need to do this better. And this is only for the indexing rewards, which are still fairly simple to grasp. But when it comes to query fees, yeah, it's a whole lot more complicated. So it's important that you try to understand this as well as possible; there's a bunch of math behind it, and there's a bunch of market behind it.
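To illustrate the dynamic Koen describes, here is a toy sketch under one simplifying assumption: that indexing rewards on a subgraph are shared pro rata to each Indexer's allocated stake on it. The real protocol also weighs subgraphs by curation signal, and the numbers below are invented.

```typescript
// Toy model: my share of a subgraph's indexing rewards, assuming a simple
// pro-rata split over allocated stake. All figures are made up.
function rewardShare(myAllocation: number, allAllocations: number[]): number {
  const total = allAllocations.reduce((sum, a) => sum + a, 0);
  return myAllocation / total;
}

const before = rewardShare(100_000, [100_000, 150_000, 250_000]);
// A large Indexer re-allocates 2,000,000 GRT onto the same subgraph:
const after = rewardShare(100_000, [100_000, 150_000, 250_000, 2_000_000]);

console.log(`share before: ${(before * 100).toFixed(1)}%`); // 20.0%
console.log(`share after:  ${(after * 100).toFixed(1)}%`);  // 4.0%
```

Under that assumption, a single large re-allocation shrinks every other allocation's share of that subgraph's rewards, which is why one Indexer's move ripples out to other Indexers and their Delegators.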
23:11
So how would you advise listeners that are a lot like me, who have a non-technical background and participate in The Graph just as a Delegator? Should we think about Indexers as though they're a company, professionals that show up to the office or work from home all day, every day, at the task of being an Indexer?
23:29
Yeah, indexing is quite time-consuming. It's all the technical work, it's understanding the protocol, it's keeping up to date with whatever is happening in the network. The good Indexer will always read through all the information. Communication around The Graph has improved a lot, but sometimes, if you see that other Indexers are facing a similar problem, reading what they are writing can help you run your own Indexer smoother or better or whatever. Or when you see the same problem, you know somebody already had this problem and you've seen the solution. Knowing what problems people face can help you troubleshoot your own, or avoid them. So yeah, I think the best Indexers will spend quite a significant amount of time running their operation.
24:18
So one of the great things about The Graph community is that you can participate in multiple roles. Right? A Delegator can be a Curator, a Curator can be an Indexer. So I'd like to ask that question to you: do you participate in any other stakeholder roles within The Graph community other than being an Indexer?
24:36
Yeah, I'm blessed with a Graph grant. Basically, my grant is about improving and maintaining the scripts that I made during Mission Control. Those scripts were very focused on Mission Control, and a lot has happened since. So I already completed the first phase, my phase zero in this case, and that was bringing them up to speed with all the current developments; they are now ready for use on mainnet and on testnet. But right now the scripts are individual scripts: okay, one script to set up this component, one script to set up the Indexer. And what the grant is mostly about is integrating all that. Because with the scripts as I had them, you needed to connect the pieces together yourself, and the main idea behind the grant is that I will write code that does the connecting of the pieces for you, rather than you needing to figure that out yourself. So it's quite a big undertaking. But yeah, it's interesting to work on, and it keeps me very close to The Graph and to the protocol and to the technology. So I think, yeah, it only benefits me as an Indexer. That's one of the reasons why I started doing it during Mission Control: writing those scripts helped me a lot to understand the technology in all its nooks and crannies and bolts, whatever there ought to be, because I'm writing tools that sort of deal with all of that in an automated way.
26:05
Well, let's talk more about that. What can you tell us about the experience you had of being a Graph Grants grantee? And do you have any advice for listeners that are contemplating applying for a Graph grant?
26:15
Yeah, I was part of the first wave, and in the first wave it took a lot of patience before I got the decision. I wanted to get started working. But yeah, try to find an idea and to clarify: what is your goal? What do you want to achieve? How does the larger community or the larger ecosystem benefit? You know, it's great that you want to work on something that's valuable to yourself, but if you're getting a grant from The Graph, one of the biggest reasons to do that is that you want to work on something that you recognize a lot more people will benefit from. But yeah, I would say think about what you want to do, maybe talk with some people, talk your idea through. And if you feel you have a good idea, then yeah, by all means, work on it and apply for it.
27:02
So Koen, at the time of this recording, you know, we're just a little bit after the first 10 subgraphs migrated to the mainnet. And as somebody who's been around since testnet and Mission Control, I'd be curious about your perspective on how that first migration went. What did you experience? How do you think it all went? Did it live up to some of your expectations of what you thought it would be like?
27:28
Yeah, I think it was more or less what I expected it to be. We were already indexing and serving those subgraphs, but until quite recently we were only getting synthetic traffic, which means somebody was generating traffic to test the system, basically. And what happened with the mainnet migration is that it's no longer synthetic traffic, it's actual users that we are serving. So that's really a huge deal, to have this migration. It's a huge milestone for The Graph. On the technical level, there wasn't that much to take into account, because we were already there. Around that time, I also updated my allocations again, to take account of the new situation. So there is always some work to do when the network changes, and you need to be aware of the changes that are happening in the network and adjust yourself and your operation to what the network is doing. Not just the network in terms of what is there in subgraphs and dapps, but also, of course, the network in the sense of what the other Indexers are doing, because what I do can potentially impact all the others, and what all the others do can potentially impact me. So it's always in motion.
28:47
So how are things different now for Indexers post-migration? What's changed? Are there new daily activities or changes in activities that you have to perform?
28:57
Yeah, it's more serious now, because playtime is over. It's the real deal now. Of course, if my Indexer were to fail, there are, I think, about 150 other Indexers who could step in. So it's not that if I fail, The Graph goes down, fortunately. But yeah, it's quite important that, you know, I do my part. Basically, I want to make sure that my systems are running as best as they can, especially now that we are serving real users. I think it's mostly a psychological click of, yeah, we're serving actual users now, which is amazing. That's what we set out to do: we want to serve actual users, we want to contribute to the bigger crypto space.
29:42
So in relation to the migration, obviously, a lot of people were watching query fees, they had a lot of interest in what would happen there. What’s your opinion or your current perspective on the state of query fees and where those might be heading?
29:56
Yeah, in terms of query fees, I think it's still really, really early. Right now I expect that the query fees will not come close to the indexing rewards; that might take quite some time still. I think it's also important to know that everything around query fees is a very complicated matter, where there are a lot of ways to go about it, and it's a constant search for balance. And I think as a collective of Indexers, as a network, we still need to understand the dynamics at play better. Indexers will need to iterate on setting query fees that are right for them, but also good for the network. That's where a lot of the math comes in. For those who want to dig in, it's the Cobb-Douglas function, which basically determines the difference between the query fees that you earn, in the sense of what you have served, and the query rebates that you will actually get paid. That is a function of the signal on a subgraph, of how much you have allocated yourself on the subgraph, of how much traffic you're pulling. So there are a lot of variables that go into it. And like I said, that's one of the important parts: what you do is not just impacting yourself, but also everyone else. The theory behind this is that if Indexers sort of work together, they can, all together, get as much in rebates out as possible. One of the reasons this is done is to avoid a race to the bottom. Setting lower query fees will potentially attract more query requests, but not necessarily net you more query rewards. And that's the key; it keeps the costs realistic, because this is a very elaborate, quite expensive infrastructure that we are all running. So that's the way The Graph tries to implement a certain protection for the network as a whole.
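For listeners who want a feel for the math Koen is pointing at, here is a rough sketch of a Cobb-Douglas-style rebate split. The alpha value, the numbers, and the exact normalization are illustrative assumptions rather than the protocol's actual parameters; the authoritative formula lives in The Graph's staking contracts and documentation.

```typescript
// Rough sketch of a Cobb-Douglas style rebate split (illustrative only).
// alpha and all figures are assumptions, not The Graph's real parameters.
interface Allocation {
  name: string;
  fees: number;  // query fees collected by this allocation
  stake: number; // GRT allocated (own stake plus delegation)
}

function cobbDouglasRebates(allocs: Allocation[], alpha: number): Map<string, number> {
  const totalFees = allocs.reduce((s, a) => s + a.fees, 0);
  const totalStake = allocs.reduce((s, a) => s + a.stake, 0);
  const rebates = new Map<string, number>();
  for (const a of allocs) {
    // Weight splits between your share of fees and your share of stake.
    const weight =
      Math.pow(a.fees / totalFees, alpha) * Math.pow(a.stake / totalStake, 1 - alpha);
    rebates.set(a.name, totalFees * weight);
  }
  return rebates;
}

console.log(
  cobbDouglasRebates(
    [
      { name: "indexer-A", fees: 100, stake: 1_000_000 },
      { name: "indexer-B", fees: 100, stake: 4_000_000 },
    ],
    0.77, // illustrative alpha
  ),
);
```

The intuition is that the exponent splits weight between your share of query fees and your share of allocated stake, so chasing raw query volume without matching stake (or the reverse) does not automatically capture more of the rebate pool, which is the protection against a race to the bottom that Koen mentions.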
32:09
Koen, there's a lot of buzz about curation right now and the work of Curators. Can you walk us through that role a little bit and how you think about the work of Curators?
32:17
Yeah, so the task of the Curators is basically to signal which subgraphs are valuable and should be considered by Indexers to index and serve. And in theory, it's also a multi-dimensional aspect for the Curators. On one hand, it can be, okay, do we need more query volume for Uniswap versus Sushiswap, and then more signal should go to the one that needs the most query volume. But it is also a thing of, okay, at some point one of the projects might publish an upgrade to their subgraph. Or there might even be similar subgraphs, subgraphs set out to do the same or similar things, and there might be different implementations. And then it's the task of the Curator to identify which is technologically the most interesting way, or which is the most bug-free version. So that's another aspect of curation. Yeah, I think Curators ideally understand quite well the technological implications of subgraphs, what they mean and how they are implemented. If, say, there are two versions of Uniswap, it's important that you can signal on the most useful one, the most problem-free one, so that the rest of the network doesn't get stuck on a broken subgraph or an unused subgraph or anything like that.
33:48
So if a listener wants to become a Curator, or they want to better understand subgraphs, where should they go? Where do subgraphs live?
33:57
Yeah, so for the hosted service, the easiest entry point is the Explorer. There, you can find all the subgraphs that are available on the hosted service, basically. And with a lot of them, you will see the website of who created it, or the dapp it is for. With a lot of them, you will also see, for example, the repo where the actual instruction sets live in terms of code, and then you can inspect the code that they have written. Eventually, when a subgraph gets deployed, the files get uploaded to IPFS, and then Indexers load them from IPFS into their systems. But in terms of analyzing for curation purposes, for example, the best way is to go to the Explorer and then find the code, basically find more information on the project or the person who created it. In case there are, for example, competing subgraphs from different creators, it might be good to understand, okay, is one official and one contributed by a community member? Understand why there are two options and what the differences are between them.
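As a concrete illustration of that deployment flow, here is a small sketch of pulling a deployed subgraph's manifest straight from a public IPFS gateway. The gateway URL is just one public option and the deployment hash is a placeholder; the real value is the deployment ID shown for a subgraph in the Explorer.

```typescript
// Sketch: inspect a deployed subgraph by fetching its manifest from IPFS.
// "Qm..." is a placeholder for a real deployment ID (an IPFS hash). The
// manifest lists data sources, ABIs, and handlers, and points at the schema
// and mapping files, which are also stored on IPFS.
const gateway = "https://ipfs.io/ipfs/"; // any public IPFS gateway works
const deploymentId = "Qm...";            // placeholder deployment hash

async function fetchManifest(): Promise<void> {
  const res = await fetch(gateway + deploymentId);
  if (!res.ok) throw new Error(`gateway returned ${res.status}`);
  console.log(await res.text()); // raw YAML manifest
}

fetchManifest().catch(console.error);
```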
35:11
How about the role of Delegators? How do you think about either the importance of Delegators within The Graph Network or their role within the network?
35:20
Well, I like the words that somebody else put to it: the task of a Delegator, ideally, is to identify quality Indexers. Or that should be one of their concerns. Like Curators identify quality subgraphs, Delegators identify quality Indexers and back them, basically.
36:11
I love asking Indexers this question, so I want to ask it to you, Koen. What's your advice for Delegators when it comes to selecting an Indexer to stake GRT with?
36:22
Yeah, I think in selecting Indexers there are different approaches that you can take, and two of the most important pieces of advice that I would give are: take your time to understand the protocol, and do your research on the different Indexers. There is a 28-day undelegation time, so if you choose the wrong Indexer, so to say, you will need to wait 28 days. It's better to take two additional days to make a solid choice that you're happy with for the long run. And the other suggestion that I would make, I always say that if I were a Delegator, I would spread my delegations across a few Indexers. Of course, I would be happy to take 100% of somebody's delegation, but the cautionary principle is to select several, or a few, Indexers and spread your delegation across them. So yeah, take your time and spread your delegation are two of the most important suggestions that I would give. And take more things into account; I know it's easy to just look at the financial metrics, even if it's still quite early days and things are still going to change, but there are other things that you could take into consideration. The Graph is a decentralized protocol, so not putting every delegation on the largest, or three largest, Indexers could be a choice that you want to make when you want to support a decentralized network. Because it's quite important that there are enough Indexers that can sustain themselves in the long run; The Graph is going to expand by quite a lot, and we're going to need a lot of quality Indexers. And one of the things that I always like to highlight is: consider the quality of an Indexer. It can be good to take into account the skill of an Indexer. How well do they seem to understand their stuff? Have they been involved a long time? Are they active in the community? Are they contributing? Have they published contributions to the network, to the code, to bugs identified? All these kinds of things give you insight into, okay, this Indexer seems to know what they are doing, rather than just trying to copy someone else, basically. For Delegators, going to the Discord, where a lot of the Indexers are and are active, could be useful for their research. Because, yeah, somehow all the technological and network discussions are happening mostly on Discord. Even if you don't engage yourself, you can search for your Indexer or their address, and you might see how much they talk. Are they trying to answer questions? Or are they mostly asking questions themselves? Which may or may not give an insight into, okay, this is a quality Indexer, they understand what they're doing, or they still need to do their homework, basically. Especially in the early days of the mainnet, we saw questions coming up like, okay, you went through Mission Control and you're asking this question? You should have known this three months ago. There is still a difference between people who are pretty much up to date, who keep reading every message in the Indexer channels, who keep doing research of their own, who keep improving their setups, and Indexers who are laid back and just say, oh yeah, the announcement is there, I need to upgrade, and now I will let somebody else figure it out. That difference is still there. And that could mean, yeah, if you're with an Indexer that has a better understanding of the protocol, it's good to incentivize that behavior, rather than the more passive approach.
But eventually, it might also benefit yourself, directly or indirectly. Because if we're incentivizing Indexers who really keep in tune with the protocol, the protocol can grow, and as the protocol grows, it will be better for everyone.
40:30
As somebody that's been active in the community since testnet and Mission Control, I'd be curious. I know you're in Discord, I know you're in Telegram, and I've seen you in the forum. What are some of the things that you're seeing Delegators ask, or maybe gaps in knowledge or education that you've seen?
40:48
I think what I've seen is that basically a lot of the Indexers have a head start, because they've been through Mission Control, and for the early months of mainnet a lot of the Delegators had, basically, to play catch-up in understanding the protocol, and in understanding that an Indexer operation is quite big in terms of the technological, human, and financial costs. But actually, I've seen that improve quite a bit. I've seen Delegators that are more knowledgeable. For example, I've been quite impressed with the Graphtronauts group on Telegram; it's an interesting group, they seem to be quite knowledgeable, they are always helping people, and they are clearly also in it for the long run. So yeah, I think for Delegators: take your time, research the protocol, and understand that this is not just a simple one-click, five-second node setup, but a lot more work. Like you said, it's almost a daily occupation to keep a smooth operation. And right now it's still quite in order, but I expect the pace to pick up a lot by the time we see more subgraphs and more networks.
41:58
So we're at the part of the podcast where I like to ask guests to define or describe really important terms that members of The Graph community need to understand. I always start with defining or describing what a subgraph is. So Koen, how do you describe or define what a subgraph is?
42:12
A subgraph is basically a technical description, so to say, of which data is interesting to the person who wrote it. A lot of the time that's going to be specific tokens, specific contracts. When it's used by DEXes, it's going to be liquidity pools. So basically, a subgraph is a definition of: this data is what I'm interested in eventually retrieving.
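For a sense of what that "technical description" looks like in practice, here is a heavily simplified sketch of one piece of a subgraph: a mapping handler, written in AssemblyScript (a TypeScript subset), that stores ERC-20 Transfer events as entities. The Transfer event class and TransferEvent entity are assumed to be generated by graph codegen from a hypothetical manifest and schema, so the names are illustrative rather than taken from any real subgraph.

```typescript
// Hypothetical mapping handler for an ERC-20 subgraph.
// The imported types are assumed to come from `graph codegen`.
import { Transfer } from "../generated/Token/Token";
import { TransferEvent } from "../generated/schema";

export function handleTransfer(event: Transfer): void {
  // One entity per event, keyed by transaction hash plus log index.
  let id = event.transaction.hash.toHex() + "-" + event.logIndex.toString();
  let entity = new TransferEvent(id);
  entity.from = event.params.from;   // sender address
  entity.to = event.params.to;       // receiver address
  entity.value = event.params.value; // token amount
  entity.save();                     // written to the Indexer's Postgres store
}
```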
42:39
That’s awesome. Thanks, Koen. So how about The Graph? How do you define or describe what The Graph does?
42:44
I would say The Graph is the most epic technology in the DeFi space for many years to come. The more I dig into The Graph, the more excited I get, and the more I see how useful it's going to be. I mean, they have been working on The Graph for like two or three years already, the Edge & Node team, but in many ways it is still early days; the potential for The Graph is yet to come. The hosted service should be a clear indication, with 7,000 subgraphs or more, and I think 29 billion queries in May. So that gives some indication of where The Graph is going to go. In the last three months, I've been running a standalone Indexer that's basically running the info section for a DEX, for a liquidity provider. And in doing that, I've come to understand the power of The Graph even better. I've been serving production traffic for more than three months now, and we have seen averages of over 800 queries per second, which translates to 69 million queries in a day. So that was a very interesting experience; I found a few bottlenecks and learned how to troubleshoot them. That's going to benefit me in the long run, because I already have that kind of volume and troubleshooting behind me. Basically, at the point when I started, it was not yet possible, definitely not on mainnet, but also not on the hosted service; at this point, the hosted service does support that network. But for me, it was a very clear indication of the power and the strength of The Graph technology that is running in all these different capacities: to have such a powerful thing, to have this technology in my hands and to use it in a way that was not hosted and not mainnet but a standalone operation, just to serve 69 million queries in a day for a project that actually needed it, and that's still growing on a daily basis. That's what excites me. That's why I think The Graph has so much potential. Now, as for what The Graph is, it's still difficult to explain. But basically, it's a data provider, in a sense: it provides access to blockchain-based data in an easier, more direct way. And The Graph as a technology is a service provider; it's kind of agnostic to what it is serving, it just has the power to serve the data. It is expanding the number of networks it's possible to index, but it's also agnostic to the content itself. So The Graph is there to serve data that exists on a blockchain. And for The Graph itself, it doesn't matter whether that's a 2 billion coin or a 20k coin; it doesn't matter to The Graph if there is just the one user accessing the data, or all 7 billion humans are trying to access that data. It's basically a neutral way of transferring data, because anything that's there on the blockchain, it can serve, as long as it's indexed. So it doesn't differentiate, in a sense. And one of the things that I'm getting at is that in the crypto space specifically, there are a lot of barriers for projects to gain traction. For example, exchanges have their listing criteria, and they might only want the 10 biggest ones, or the 10 with the most volume, or whatever. That barrier isn't really there in The Graph. Even if you're a small or starting project, you can use The Graph and basically pay for what you need, or have your user base pay for what they consume. And that's one of the exciting things, especially in a world where things like censorship seem to be on the rise, quite unfortunately. The Graph itself is a neutral technology; it can serve whatever data needs to be there.
47:00
Something that’s been catching a lot of news and attention in The Graph community is the new partnership with StreamingFast. I’d be curious, from the Indexer perspective, how does that new partnership impact The Graph? I mean, what would be your advice to Delegators, as they think about what this new partnership means?
47:19
Yeah, it basically means that a group of very skilled people will work on the development of The Graph and push it forward. Roughly speaking, I think the capacity to improve and push The Graph forward has doubled, because now there are both the Edge & Node and the StreamingFast teams, and they will work on elaborating and improving The Graph technology and what it's capable of. So yeah, this is really big news. It makes me even more positive about The Graph.
47:50
So can you go back a little bit to what exactly the StreamingFast team is going to be doing at The Graph? Maybe in a little more detail?
47:58
Yeah, I don't know the details of who is doing what, and they might still be discussing that amongst themselves. But there is a lot of work to be done on The Graph. The Graph is still maintaining the hosted service, which takes personnel and means. There's a lot of development happening, basically, on the state channels, the Scalar implementation, and I think there's quite a lot of work happening on which networks are being supported. Right now, in terms of network support, we are still in the pool of EVM-compatible networks, but there are blockchains that are not EVM compatible. For me, that's one of the things that I'm looking forward to, that it can expand beyond that pool of blockchains. That will be another pool that opens up for The Graph, and it could be so big when that becomes possible. And I'm quite sure we should speak in terms of when and not if; that's got to be on the agenda.
48:53
This was great. Thank you so much for taking the time. And listeners should know that I've been tracking you for a long time. I remember coming across you very early on in the forum, and ever since that time I've been trying to get you to do an interview, and here we are. So thank you so much for your patience and for letting me bug you for so long. If listeners want to learn more about you or your Indexer operation, what's the best way to do it?
49:16
Yeah, I can be found quite easily in The Graph Discord. Just tag me in the Discord or send me a personal message there. I try to check Telegram daily as well. And other than that, I have a Twitter, which is Mind Heart Soul 4. I need to step up my Twitter game, but I'm always working on the servers and the consoles, so I sometimes need to resurface. But yeah, I guess Discord is one of the easiest ways, because I'm always there processing info about The Graph and The Graph Network. And my other project is also on a Discord server.
DISCLOSURE: GRTIQ is not affiliated, associated, authorized, endorsed by, or in any other way connected with The Graph, or any of its subsidiaries or affiliates. This material has been prepared for information purposes only, and it is not intended to provide, and should not be relied upon for, tax, legal, financial, or investment advice. The content for this material is developed from sources believed to be providing accurate information. The Graph token holders should do their own research regarding individual Indexers and the risks, including objectives, charges, and expenses, associated with the purchase of GRT or the delegation of GRT.
©GRTIQ.com