Episode 12: Today I’m doing something different – I’m speaking with a panel of guests on the topic of the recent migration of subgraphs to the mainnet. If you have spent any time in The Graph’s Discord or Forum, then my panelists will be familiar names: Martin, Slimchance, Payne, and Jim Cousins. Each panelist represents an important stakeholder perspective and provides valuable insights listeners will find helpful. The objective of our conversation was to take listeners inside the recent migration and address common questions.
The GRTiQ Podcast owns the copyright in and to all content, including transcripts and images, of the GRTiQ Podcast, with all rights reserved, as well as our right of publicity. You are free to share and/or reference the information contained herein, including show transcripts (500-word maximum), in any media articles, personal websites, in other non-commercial articles or blog posts, or on a non-commercial personal social media account, so long as you include proper attribution (i.e., “The GRTiQ Podcast”) and link back to the appropriate URL (i.e., GRTiQ.com/podcast[episode]). We do not authorize anyone to copy any portion of the podcast content or to use the GRTiQ or GRTiQ Podcast name, image, or likeness for any commercial purpose or use, including without limitation inclusion in any books, e-books or audiobooks, book summaries or synopses, or on any commercial websites or social media sites that offer or promote your products or services, or anyone else’s products or services. The content of the GRTiQ Podcast is for informational purposes only and does not constitute tax, legal, or investment advice.
We use software and some light editing to transcribe podcast episodes. Any errors, typos, or other mistakes in the show transcripts are the responsibility of GRTiQ Podcast and not our guest(s). We review and update show notes regularly, and we appreciate suggested edits (email: iQ at GRTiQ dot COM).
The following podcast is for informational purposes only. The contents of this podcast do not constitute tax, legal, or investment advice. Take responsibility for your own decisions, consult with the proper professionals, and do your own research.
00:06
Hi there, my name is Jim, I am a council member on The Graph protocol. I’m also a part of the relations team in The Graph Foundation, specifically looking at Indexer experience. And on top of that, I also run an Indexer named WaveFive.
00:23
Hey, I’m Martin, and currently work at The Graph Foundation doing ecosystem development and facilitating the growth of the community.
00:31
Hey, everyone, my name is Payne. I am part of the Foundation’s Indexer relations together with Jim, and apart from that, I also work on QA and on the Explorer side of things with the Foundation and Edge & Node. Together with a friend of mine, I also run StakeSquid, which is kind of a medium-sized Indexer on The Graph Network. We’ve been here since the beginning of the Mission Control testnet, since July of last year, if I remember correctly.
01:06
And Slimchance.
01:08
I work with Curator relations for The Graph Foundation. And my role is to empower Curators with the documentation, support, and tools they need to succeed. I also act as one of the domain experts in the grant process.
01:21
Awesome. Well, welcome to each of you. And thank you for joining this panel discussion as we take a look at the first 10 subgraph migrations and provide listeners with some insight into all the things that happened, and continue to happen, in relation to this important event. So I think it’s important to set the context for where we are in the roadmap and everything that’s happening at The Graph, and what this recent migration of these 10 subgraphs represents. So kind of setting the context, maybe I’ll go to you, Martin.
How should listeners think about where we’re at in the evolution of The Graph and how the first 10 subgraph migration fits in?
02:00
That’s a great question. I think we’re right in the middle of a huge process. So if we take a step back, we have to remember that the hosted service has been around for a couple of years. There are already thousands of subgraphs deployed, and if I recall correctly, during April it processed over 20 billion queries. And then on a similar track, after the network launched at the end of last year, the community did the first step, which was bootstrapping indexing supply. And now we’ve seen these first subgraphs migrating, which seems like an obvious next step. But this includes a lot of work by the Indexer community, and it’s building the muscle for the next thousands of subgraphs that are migrating. So if you take all of this into account and you take this bird’s eye view, there’s so much more coming: thousands of subgraphs are migrating soon to the network. And they will allow applications to become fully decentralized, thanks to The Graph Network. This will change everything, at least in my mind, because it’s not only how the community organizes itself that will change, since we will see new relationships between Indexers and Curators, and let us remember that that includes subgraph developers. But more importantly, how people rethink the stack of applications will change. Data will be organized, indexed, and provided to anyone in a permissionless way. And that will change the way people think about data.
03:35
So Jim, what’s the best way then to describe this migration for people that may not understand it, and its significance?
03:44
Sure. So I guess, you know, just to step back for a minute and think about what the migration is. What are we talking about when we say migration? Currently, we have the hosted version of The Graph, which is a centralized, you know, large infrastructure that lives in the cloud and serves data to dapps today, hosts all of the subgraphs, all 9,000+ subgraphs, and provides a data layer to dapps for free. Running alongside that we have our new decentralized Graph protocol mainnet, and this is where all the Indexers live, this is where all the Delegators live, this is where the Curators live. This is where the subgraph developers eventually will live too, although they’ll be living between two worlds for a while. So the centralized service, by the very definition of the word centralized, is one service, right? When we move, or migrate, as is the word we’re using here, we migrate from the centralized service, where the subgraphs are being served by one very large, essentially, indexing infrastructure, to the Indexers on The Graph protocol mainnet, of which there are about 170 right now. And those 170 Indexers, over the period of the migration, will start to take on the responsibility of delivering the data, the queries, to those dapps, and it will be in an evidently much more distributed, decentralized manner. So we’ll have 172 Indexers, in theory, with today’s statistics, providing data services to dapps. So why is this important, right? This is incredibly important. We often talk about decentralization, and you know, some projects will do their work on one end of the decentralization spectrum, where it’s more like theater, and they aren’t really doing decentralization. And then you have other projects that are sort of maximalists when it comes to decentralization, and they want their data to be as decentralized as possible, right? And for most of the people that are involved with The Graph protocol, we believe that a data layer needs to be as maximally decentralized as possible. And that’s why the vision of The Graph Network is coming into fruition today. So its significance is that we’re moving from the centralized hosted service to 172 Indexers delivering the subgraph data in a fully decentralized way to all of the dapps that are out there and like to use The Graph today.
06:10
Anybody else want to add in on that?
To double down on why decentralization is so important: it’s permissionless, censorship-resistant, robust, secure, and also naturally load balanced. And that will be extremely important going forward for dapps that wish to have a truly decentralized infrastructure.
06:32
I thought it might be interesting to start with this question for the panel about something new you learned or something that surprised you as a result of the first 10 subgraph migrations. And so maybe I’ll go back to you, Jim, what was it that you learned, either about the community or about the protocol itself, following the first migration?
06:52
Well, you know, with very large projects, you’ll often hear aphorisms and sort of common sense quotes about the fact that, you know, things will always go wrong, there will always be something that goes wrong with large projects. And somehow, once you’re in the midst of a very large project, you tend to forget that, and you think that, you know, you’ve written your plan out and that’s how it’s going to go. But the reality over the last few months, moving from testnet, where we’ve been doing, you know, very high queries-per-second load testing, on into mainnet, is that there are various different unexpected challenges that you come up against on a project of this size. And the interesting thing, to me, is the way that the community has kind of rallied around these problems, because we’re talking about pretty complex problems. For example, one of the 10 migration partners that was deployed had what we call a fatal error on one of their subgraphs while the Indexers were syncing it. And what this meant was that the subgraph could no longer be indexed. What usually happens in that situation is that the developer will fix that and deploy it as a new subgraph. But now that we have money involved in the network, you know, we have value involved in the network in the form of GRT, what we find is that Indexers actually end up getting punished in, you know, rare situations like that. So the community has been realizing now, over the last couple of weeks, that although we have all the theory in place for the mechanism, there are going to be things that need to change over time. And, you know, the interesting thing to me has been seeing the community rally around that, and making noise in the right places to fix things. For example, this fatal error incident that I’ve talked about has already been integrated into the charter, and the charter is how we police Indexers and the integrity of their data and their queries. So we’ve already integrated this incident into the charter so that if it happens again, then Indexers will not be punished financially for that. That’s the most interesting thing that I’ve seen in the last, I guess, two weeks.
08:50
I’d like to open it up to any of the others of you: was there something that interested you or surprised you during this process?
08:56
Well, like Jim said, it’s never easy, you should always expect some trouble, but the most important thing is to have everyone around you in support, and move forward with whatever happens and try to improve things over time. But I also want to take this chance to give a shoutout to the Office Hours, because that’s where most of these discussions started. And basically, we chatted every week around these migrations and the troubles that we had. And yeah, I mean, people are getting more and more involved as of late, because now they have actual work to do; they don’t just have one single subgraph to index, there are multiple subgraphs, there are queries involved. Now they have to scale up their infrastructure. Everything contributed to more activity and more involvement, and it’s really a nice thing to see in the community.
09:51
Can I ask you a question then, Payne? So from your perspective, as somebody who runs multiple Indexers, sort of pertaining to that subject specifically, is there anything that you’ve seen that’s been interesting, like, I don’t know, gas efficiency or performance issues or anything like that that sort of pertained to having such a large scale operation?
10:10
Yeah, I mean, gas costs were definitely a lot higher at some point. And luckily, we didn’t have to rebalance anything, because more or less we were ending up with kind of the same returns. But the biggest trouble I ran into was the fact that the Enzyme subgraph basically blew up for me on five different machines before everyone else had that. And I basically just went on a troubleshooting session with Ford and Leo from Edge & Node, and we kind of found out that TurboGeth is having some trouble in the way they read the blocks from the chain. So, if I remember correctly, we had this environment variable in the graph node for calls by block number, and you have to have it disabled, right, so that all the calls are made by hash; in this situation you’d have Geth, for example, take a call by number and then convert it to by hash. So that was a big problem. We didn’t know about this. Every single week, there’s something new to learn and something new to adapt to, and something new to overcome. Like, even with our five Indexers, we were having trouble with all of them syncing very slowly. And we had to import a database from someone in order to even close the allocations. It was quite a challenge to import a 400-gig, non-optimized database dump into five servers. But yeah, that’s already over. And I also learned something from that: you should do your backups in a certain way to avoid this kind of thing.
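To make the block-fetching point concrete, here is a rough editor’s sketch (not from the conversation) of why fetching blocks by hash is more deterministic than fetching by number: a block number can resolve to different blocks across clients or after a reorg, while a hash pins one exact block. The JSON-RPC methods shown are the standard Ethereum ones; the endpoint URL is a placeholder, and this does not reproduce graph-node’s actual configuration.

```typescript
// Illustration only: eth_getBlockByNumber vs eth_getBlockByHash.
// A number can resolve to different blocks after a reorg; a hash cannot.
// RPC_URL is a placeholder for whatever node the indexer points graph-node at.

const RPC_URL = "http://localhost:8545";

async function rpc(method: string, params: unknown[]): Promise<any> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  return (await res.json()).result;
}

async function main() {
  // Fetch by number: whatever block the node currently has at this height.
  const byNumber = await rpc("eth_getBlockByNumber", ["0xbc614e", false]);

  // Fetch by hash: exactly one block, or null if the node doesn't have it.
  const byHash = await rpc("eth_getBlockByHash", [byNumber.hash, false]);

  // If the chain reorged between the two calls, these can disagree;
  // pinning to the hash is what keeps indexing deterministic.
  console.log(byNumber.hash === byHash.hash ? "consistent" : "reorged");
}

main();
```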
11:53
I think one of the questions that listeners have is about these partners that were the first 10 subgraphs to move over and how important those partners have been early on here.
12:03
They were definitely super important. As you were saying, there was this issue that arose with the Enzyme subgraph, and it becomes important that the developer is active and knows the community. We’ve definitely seen everyone super active, as we have over 10,000 subgraph developers working on the ecosystem. But the way the main developer behind the Enzyme subgraph jumped into the Indexer conversations when the subgraph broke definitely made a huge impact on how the challenge was solved. I think that’s the main takeaway on the importance of them.
12:39
How will the experience with these first 10 inform future migrations, and the way people should think about what they’ve learned from it?
12:48
I think it’s informing everyone in the community on different aspects. One, definitely, the subgraph developers need to be involved, as we mentioned before in the case of the Enzyme subgraph. But also the aspect of human coordination: not every Indexer is always online, because we live in a world where there’s always somebody sleeping. So everything takes time, and you have to account for that. But you always see that happening and see how long it takes. I think Jim was mentioning even before, like, many of them have already upgraded, but not necessarily all of them. And learning about that human coordination is also crucial for the success of the network.
Anybody else have an opinion on that?
13:29
Yeah. So when I think about the things I’ve learned over the first migration period, it relates back to, you know, the unexpected challenges that you come up against. I think, as, you know, everybody on here is a leader within the community. And the thing that I learned, I think, was that when you have these potentially very complex issues come up, when you’re trying to discuss them with the community, the sooner you can get your thoughts to paper, the sooner a resolution can be found. That’s been my experience over the last couple of weeks as we’ve dealt with this fatal subgraph error and how we will deal with it in the future. So getting your thoughts to paper so people can see them and read them as a whole, you know, thought piece, I think, for me, is very important now as we go on and we face more challenges.
14:19
Yeah, I think Jim pointed out really well that those relations between the subgraph developers and the Indexers will develop over time. And the community will be held strongly together by those relationships, because everyone will be helping each other to not make mistakes, because now mistakes will cost money, and no one wants to pay money for stupid things like having a subgraph blow up, right? I mean, everyone learns; we Indexers learned the hard way what happens when a subgraph is failing. Developers also learn that it’s not a good idea to punish them for these kinds of things, depending on the scenario. So everyone is now working together to nail those things down and make it easier for future subgraph developers to have an easier life when they come along and deploy their subgraphs to the mainnet.
15:16
I could add another learning, and maybe Jim or Payne can comment better on this. But I think one of the biggest learnings, also on the Indexer side, was the difference for Indexers that were running Scalar on the testnet: they definitely were able to test everything and saw it working, not only on the query side, seeing, you know, I think 7,000 queries per second, but also how easy it was to upgrade. I think, Jim, you mentioned it was the maturity of the software that you were experiencing, as everything becomes easier to handle. Everyone that was on the testnet was seeing that weeks before. And there’s so much value in being active there that I think it cannot be measured in words. But definitely there’s even some confidence when you’re upgrading; it’s like you’ve seen this working for weeks on your own infrastructure. So there’s a lot of value and learnings there, for sure.
16:07
Yeah, I think during migration, we saw the value of different stakeholders taking ownership of the network and coming together: subgraph developers, Indexers, and The Graph core team, solving not only technical issues, but also questions regarding governance.
16:27
From maybe the Indexer perspective, how were Indexers working with these particular 10 partners? Was there really close communication to make sure everything went well and was seamless, so that they could give feedback in both directions? How should we think about what it was like behind the scenes with these particular 10?
16:46
We Indexers didn’t have much to discuss with those partners; we just saw the subgraphs being published, and everyone took action on their own. Once the first stages of migration end and everyone is able to publish their own subgraphs, there won’t be any communication like, ‘Hey, guys, I’m about to publish a subgraph,’ or anything like that. So you have to prepare for this, and you have to be dynamic, and you have to act on all the opportunities as you get them.
Jim, what would you want to add?
17:17
I think we were very lucky, in terms of, you know, there eventually being many thousands of subgraphs, and we’ve gone in with our first 10. And, you know, shout out again to Foobie. Foobie, the developer of the Enzyme subgraph, was, you know, the moment that things were going wrong and people were reporting issues in the Discord chat, Foobie was there answering questions and trying to figure out exactly what the issue was, and then taking ownership of issues. And you really couldn’t have asked for a better migration partner than that: somebody who’s proactive and wants to get involved in the process and already understands deeply how the subgraph development process works. They’re the best people to get involved in planning and putting structures in place to make sure that once we’re scaling up into the thousands of subgraphs, we can overcome these sorts of problems at scale, right? The reality is that subgraphs have fatal errors, errors that stop us from indexing them. This is the nature of the beast as it stands today; it’s going to happen again. So we need a framework in place to tackle that every time it comes along. And, you know, this is not something that was never thought of; it’s always been in the pipeline that we need a solid, relatively decentralized process to deal with this. It’s just now we’ve got it for real. And to be honest, I’ve got a lot of confidence, given that the Indexers obviously now have capital on the line. So they’re very keen to make sure that anything that impacts their bottom line, and the bottom line of their Delegators, is addressed quickly. The subgraph developers, they also have, you know, economic interest here, because they have to signal on their own subgraphs, which obviously costs them GRT. So all of this process is tied to real financial consequences now. So I’m quite confident that with developers like Foobie, and with the Indexer community, and with the assistance and the guidance and the governance of the Council, and also, you know, the continuing guidance from Edge & Node and the wider community, we’re going to find a way to manage this long term.
19:18
Slim, I want to ask you a question related to Curators. You know, this is one of the things that particularly members of the Delegator community are really interested in: to see more activity from Curators and see curation services turned on. So what are you seeing within the Curator community currently?
19:36
I see a lot of expectations of the upcoming curation launch. And if everything goes to plan, we will have Curation going live in the coming 30 days. And at that point, we will have the Curation UI and guides ready.
19:50
How important is that milestone for The Graph community? I guess the question is Delegators are putting a lot of interest in Curators and curation services and maybe it’s because a lot of them want to be involved in curation. But what’s your opinion on that?
20:05
I think curation markets are going to be at the cutting edge of crypto-economic primitives in general; it has never been done at this scale and level of complexity before. Curators will have an integral role in actively assessing subgraphs. So I think it’s very justified that people are excited about it. But it also carries a high risk, especially early on in the network, as nothing is settled and no kind of equilibrium has been found yet. So I would say to Delegators that are looking at curation to be excited, but also to do your research and be prepared.
20:42
Do any of you others have advice for members of the community, and maybe Delegators specifically, who want to get involved as Curators? And now that you have this new perspective, having seen these 10 subgraphs migrate, does that inform what advice you would give?
20:56
I guess, two points of view. So, you know, I always talk about putting hats on, right, so put my Curator hat on for a minute. During Mission Control, we did have a taste of what curation was like. And what I learned from that is that you need to be very, very cautious if you don’t understand bonding curves correctly, because one of the main dynamics within the curation mechanism is this bonding curve for, you know, buying signal on a subgraph. Now, I’m not going to go into the dynamics of that in any detail, because I find it quite complex myself, a lot of math involved. But what I do know is that, you know, in the sort of crypto space, in the DeFi space, we have this concept of ‘APE’ing in, right, so you don’t think about what you’re doing, you just throw your money into something. What I know for a fact is that that’s the worst possible thing you could ever do in a curation space where you have bonding curves, because you may end up taking out less than a percent of what you put in if you’re trying to speculate. So, you know, from my own point of view, and I’m a pretty risk-averse person, if I was going to do curation, and I really did feel like I needed to play the game from the beginning, then I would be playing with a very, very small percentage, like single digits of whatever my block of money is for curation, just to learn about it and be able to talk about it, you know, with some authority. So that’s what I would recommend to Delegators. Now, if you are a very risk-averse person, and I think most Delegators probably are because the Delegator role is kind of designed that way, right, you’re a very risk-averse person, just stand by and watch what happens. Pay attention to the Curator channel, and listen in to what’s going on. Now, from the Indexer perspective, there’s also a perspective for the Indexer when it comes to curation. So just to give you an idea of where Indexers are right now, Indexers have a couple of choices. Well, they have unlimited choices for strategy when it comes to allocating, but there’s two main strategies that can be followed right now. One is allocation optimization. What that means is that the Indexer will take his stake and his delegated stake, and he will allocate them so that they make the most GRT per hour per GRT allocated. That’s what most Indexers have been doing, if you follow things on-chain, since migration started. Now that we’re into the migration phase, for those that don’t really look at the stats or the signals right now, just to give you an overview, it’s very simple right now. So we have 10 subgraphs, plus one old subgraph that was from the first testnet, and those 10 migration subgraphs have roughly the same signal on all of them. So what that really means is that Indexers should be spreading their allocation out relatively evenly across all of those partners if they want to maximize their query fees. Some Indexers are choosing to do that. Anyone that follows me on Twitter will know that I put out a sort of thesis on the migration for WaveFive, which is signal weighting, so I want to support as many subgraphs as I can during migration; other Indexers might not choose to do that. When curation comes along, and this is the important bit, those signals are going to change. Right? So that’s the thing to be watching out for, even as a Delegator. Is your Indexer following the signal, or are they doing something completely different? I have my own personal beliefs on which is the thing to do right now.
But, you know, some others might disagree with me; they might be profit maximalists. But I would definitely, as a Delegator, be paying attention to those things. The best place to pay attention to those things is graphscan.io. You can go in there and look at your Indexer and see where they’re allocating to, how many different subgraphs they’re allocating to, and how much they’re allocating to each subgraph. And then there’s also stakemachine’s dashboard, which gives us sort of a more in-depth view of what’s going on, both for you as a Delegator and for your Indexers. And I believe he also has, Slim, correct me if I’m wrong, I think he has Curator dashboards in the works as well.
24:41
Yeah, at least he has it in the UI. So I believe so.
24:45
Yep. So that’s coming as well. But I guess those are my two points of view on the curation side of things, from an Indexer point of view.
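As an editor’s illustration of the two allocation strategies Jim contrasts above, here is a rough sketch. It assumes a simplified model in which indexing rewards on a subgraph are proportional to that subgraph’s share of total curation signal, and an Indexer’s share of those rewards is proportional to its share of the stake allocated there; the subgraph names, signal amounts, and reward figures are made up for the example and are not protocol data.

```typescript
// Editor's sketch of "signal-weighted" allocation vs. chasing one subgraph.
// Simplified model (an assumption, not the exact protocol formula):
//   rewards to a subgraph  ~ its share of total curation signal
//   an indexer's share     ~ its allocation / total allocation on that subgraph

interface Subgraph {
  name: string;
  signal: number;          // GRT signalled by curators
  othersAllocated: number; // GRT allocated by all other indexers
}

const subgraphs: Subgraph[] = [
  { name: "Enzyme",       signal: 10_000, othersAllocated: 2_000_000 },
  { name: "PoolTogether", signal: 10_000, othersAllocated: 6_000_000 },
  { name: "Audius",       signal: 10_000, othersAllocated: 3_000_000 },
];

const myStake = 1_000_000;       // GRT available to allocate
const rewardsPerEpoch = 300_000; // total indexing rewards in this toy model

// Signal-weighted: split my stake in proportion to each subgraph's signal.
const totalSignal = subgraphs.reduce((sum, g) => sum + g.signal, 0);
const signalWeighted = subgraphs.map((g) => ({
  name: g.name,
  allocation: (g.signal / totalSignal) * myStake,
}));

// Expected rewards under the simplified model for any allocation plan.
function expectedRewards(plan: { name: string; allocation: number }[]): number {
  return plan.reduce((sum, p) => {
    const g = subgraphs.find((s) => s.name === p.name)!;
    const subgraphRewards = (g.signal / totalSignal) * rewardsPerEpoch;
    const myShare = p.allocation / (g.othersAllocated + p.allocation);
    return sum + subgraphRewards * myShare;
  }, 0);
}

// "Profit-max" here: naively put everything on the least crowded subgraph.
const profitMax = [{ name: "Enzyme", allocation: myStake }];

console.log("signal-weighted:", expectedRewards(signalWeighted).toFixed(0));
console.log("single subgraph:", expectedRewards(profitMax).toFixed(0));
```

The point of the sketch is only that the two strategies answer different questions: the single-subgraph plan chases the best reward-per-GRT gap, while the signal-weighted plan keeps stake spread across whatever Curators are signalling on.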
24:53
I would suggest, for sure, going and reading the This Month in Curation newsletters that Slim puts out every month. Especially the first two editions cover a lot of the risks and benefits, with a lot of links to understand how bonding curves work. So you get all the learnings before the UI is ready to launch curation.
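For readers who want a feel for the bonding-curve risk the panel keeps flagging, here is a compact worked example added by the editor. The square-root curve and the numbers are assumptions chosen for simplicity, not the exact parameters of The Graph’s curation contracts; the point is only that entry position on the curve dominates the outcome.

```latex
% Editor's illustration with assumed parameters, not the protocol's exact curve.
% Suppose shares $s$ minted against reserve $R$ follow a square-root curve:
\[
  s = \sqrt{R}, \qquad p(s) = \frac{dR}{ds} = 2s .
\]
% An early curator deposits $R_1 = 100$ GRT: $s_1 = \sqrt{100} = 10$ shares.
% A late curator then adds $R_2 = 300$ GRT (reserve now 400):
\[
  s_2 = \sqrt{400} - \sqrt{100} = 10 \text{ shares, at an average of 30 GRT each.}
\]
% If the early curator now burns their 10 shares, they withdraw
\[
  400 - 10^2 = 300 \text{ GRT (a 3x gain),}
\]
% leaving the late curator's 10 shares redeemable for only 100 GRT,
% a 66\% loss even though nothing about the subgraph itself changed.
```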
25:12
I would also like to add to that Dave’s presentation, about 10 minutes into the phase two workshop of the Curator program. He goes through how to assess a subgraph, and it’s a very good primer to understand how subgraphs work and how to determine if it’s a good subgraph to curate.
Payne you want to add anything?
25:33
Not really, other than curation will be awesome. I can’t wait to study the market dynamics and to see how everything changes whenever the signal on specific subgraphs changes. Like, how would Indexers react to that? There are a lot of things to put down on paper and visualize, basically. It’s gonna be fantastic.
25:55
What did we learn about the Indexer community? As someone just watching, I saw different Indexers respond in different ways, what’s been the insight or anything we’ve learned about the Indexer community itself, following the migration? Jim, we’ll start with you.
26:09
I guess, as you know, working on the Indexer relations, or Indexer experience, side of things, one of my jobs is trying to make sure that when we have these big iterations, you know, these big software upgrades, we try and get as many people as possible onto those upgrades as soon as possible. And to be honest with you, I felt like, probably for the first 24 hours, I wasn’t getting the response I was expecting to see in terms of getting Indexers to upgrade to the version of the Indexer software stack that sort of unlocked the mainnet transactions. But now I look at where we are now; I think it was Tuesday, the 25th of May, I believe that’s the day that 0.15.1 of the Indexer stack came out, and the Indexer chats are on fire, everyone’s getting upgraded. But what I have noticed, and it kind of plays back into what I mentioned, you know, when talking about curation: signal-weighted allocations, that is, dividing your allocations, yours and your Delegators’ allocations, across all of the migration partners. If you take a look at network.thegraph.com, you can actually see how many Indexers are sort of spreading their allocation out amongst all 10 subgraphs, and who are not spreading them out. And, you know, there’s a mix of things happening there. What I would like to see is more people supporting more of the subgraphs for now, even if it means they’re going to be making less profit per GRT allocated. So it’s been interesting to see that gradual change. I was going and looking at the numbers this morning, and for the first time, I noticed that we were almost optimally allocated across all subgraphs. Since migration started, for a long time things were extremely unbalanced in terms of where stake was being allocated, but now it’s starting to even out across the partners. And I’m just hoping to see more of that as people are getting up to speed on mainnet. People are also getting up to speed on testnet, to make sure that they can start testing the new things that come down the line, like Vector; we got Scalar on mainnet, and the Vector integration is still to come. I guess that’s my initial observation.
28:12
I’m happy to see a lot more people being involved in all these discussions, especially during Office Hours. I love seeing more participation lately, and also in the chat, so outside of Office Hours. But yeah, it’s amazing to see so many people wanting to actively participate in the community, actively solve problems whenever they come up, and actively be involved in the growth of the network. What else can I say? It’s amazing.
28:42
I think your point there, Payne, about seeing more people participate is really important. Because I feel like, you know, sometimes you feel a bit lonely as a leader, or, you know, an assumed leader within a group. And you don’t feel like there are new faces coming up to the surface that are sort of making the effort to contribute to the community. But now that we have money on the line, value on the line, more leaders are starting to come to the forefront in the Indexer space. We’re seeing the same thing happen in the Delegator space, and I expect to see the same thing in the Curator space as well. So I’m feeling very positive about that. Whereas, you know, in the sort of long period of anticipation of moving from testnet to mainnet, I felt like things were dwindling in terms of active Indexer participation. But I think it’s really kicking off now, and we’re seeing new names come to the forefront. They’re really contributing useful and valuable things to the community.
29:39
One thing that I noticed, as just an observer, about the Indexer community following the migration was that there was a bunch that seemed to leave; in other words, the total Indexer number appeared to drop, assuming my data is correct. Were any of you surprised by that? Or would that be something we would expect, kind of a weeding out of people that aren’t participating or able to participate? Any thoughts on that?
30:04
I was planning to investigate exactly who those people were, and why they decided to just stop running their Indexer business operations. But one thing I could guess is the fact that if you have a medium to large sized Indexer, and I don’t think that’s the case for those guys, but if you do, then your Indexer expenses go up linearly; like, if it starts allocating towards multiple subgraphs, you basically pay a lot more. That could be one of the factors. The other factor is that maybe they didn’t want to bother to learn how the protocol works, you know, because until the migration started, we basically just had one subgraph; no matter how bad you were, you could technically do your job well, there were no performance KPIs present at that point. But now things change, and they probably just didn’t want to bother. I don’t know.
31:03
With the network, you know, we talk about this all the time on Office Hours, Payne, we talk about the idea of network incentives, right, and how the incentives for the network are going to change over time. As things stand today, in terms of the GRT market, it’s a Delegator-driven market. And in terms of the network incentives, there is a bias right now towards the GRT that is earned via Indexer rewards. And the ultimate, it’s not an end goal, but maybe the steady state of the network, is some sort of balance between the Indexer rewards and the query fee rewards, right, and we’re not there right now. And it’s going to take time for us to grow into that space. So I also sometimes wonder if these Indexers that fall off the grid are maybe falling off the grid because they don’t see that happening soon enough, and maybe their stake is small enough that it’s quite difficult for them to break even with the allocation they have.
32:02
Yeah, I mean, at some point, it was costing upwards of $400 per allocation, per action taken, per close and open, right. And if you don’t have at least, I think it was 1 million GRT last time I calculated, you were basically breaking even the first time you were closing and opening your allocations in a month. And like I said, I don’t know exactly the details of who those people were and how big their stake was. But the gas prices also could have played a role here. Just wanted to point that out.
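A back-of-the-envelope version of Payne’s point, added by the editor: the $400-per-allocation-action figure comes from the conversation, while the reward rate, GRT price, and monthly cadence below are made-up assumptions to show how the break-even stake falls out of the arithmetic.

```typescript
// Editor's rough break-even sketch; only the $400 figure is from the episode.

const gasPerAllocationAction = 400;  // USD to open or close one allocation
const subgraphsAllocated = 10;       // one allocation per migration subgraph
const actionsPerMonth = 2;           // close + reopen each allocation monthly

const monthlyGasCost =
  gasPerAllocationAction * subgraphsAllocated * actionsPerMonth; // $8,000

const assumedApr = 0.10;             // assumed indexing-reward rate
const assumedGrtPrice = 0.70;        // assumed USD price of GRT

// Stake needed so one month of rewards just covers the gas bill:
const breakEvenStakeGrt =
  monthlyGasCost / (assumedApr / 12) / assumedGrtPrice;

console.log(`~${Math.round(breakEvenStakeGrt).toLocaleString()} GRT to break even`);
// With these assumptions it lands in the ~1.4M GRT range, which is why
// small stakes struggled while gas was expensive.
```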
32:35
But it’s also very clear, you know, from a more positive point of view on the number of Indexers, that there’s a hardcore 20 Indexers that, you know, are very active all the time, and are the core contributors in the network. And some of them are the very biggest in terms of self-stake on the network or in total delegation on the network. And when times get hard, and people start thinking about bear markets and stuff like that, those are the people that you look up and they’re still around, right? The people that really believe in what we’re doing here. And the tail end of it is just, to me, the natural ebb and flow of, you know, a living, breathing network. You could maybe think of it in the same terms as, I don’t know, ETH nodes going on and offline, right? People turn them on and off for various different reasons. I know there’s not so much of an economic factor there, but it’s the same idea, that there’s an ebb and flow of active Indexers in the network. And we have seen new names come around as well. People that have graduated from the testnet and are now staking on mainnet, and they’re fresh faces. They weren’t on Mission Control. They just believe in the vision, and they’ve put the capital to work to get involved as Indexers.
33:41
I think it’s also worth highlighting that even with these changes, as the network is maturing, which I believe is the nature of a live network, there won’t always be the same amount of Indexers, there won’t be the same amount of participants in the different phases. But it’s interesting that the total staked GRT remains at an all-time high, around 27% of the supply, yeah, 2.7 billion. So I believe there’s also a chance that, maybe because of all the talk regarding the arbitration charter and all the investigations that are happening around mainnet currently, maybe they just stopped their Indexer and are delegating. I mean, I haven’t done the research on the specific addresses. But it’s worth noting that even so, the current stake is at the highest level of what we have seen in the past five months.
That’s a good point.
Anything to add there, Slim?
34:36
In times like this, I think it’s a great opportunity for Delegators to learn more about the Indexers who are proactive, participating in discussions, and looking to learn. I don’t think Delegators should focus too much on query fees or the actual rewards; as Jim and Payne have touched on, it’s not the most important thing during this migration phase. But instead how they act, through participating and learning themselves, because everyone has something to learn, I believe.
35:08
Another question, and I feel like I need to ask it on behalf of all the Delegators that regularly listen to the podcast: it seems like there was this potential red flag that an Indexer was only on PoolTogether following the migration, so it was like, when are they going to participate in these other subgraphs? But there’s also the other side of this, which is there are Indexers that have gone all in on one of the 10 new subgraphs; they’re only in one of those. I guess the question is, is that a red flag too, or does it make sense that Indexers, as more subgraphs migrate over, might go all in on one particular subgraph? How should Delegators think about that?
35:48
As Jim was saying earlier, it all depends on what your target is, whether it’s profit maximization, or whether you just want to serve queries for all the subgraphs regardless of the, like, inflationary rewards profit maximization, you know. But speaking from the point of view of profit maximization, at that given moment when the subgraphs were added into the network and signaled upon, it would have made a lot more sense, capital-wise, to allocate to those new subgraphs, because for, I don’t know, a few days’ worth of time those rewards were literally double, triple compared to the PoolTogether one, and some Indexers just took their damn time, I mean, seven days or even more in some cases, before they realized, like, ‘Hey, something happened.’ And, yeah, that’s the side effect of not keeping your eye out on what the network is doing, you know. But to answer your specific question, it’s not a bad thing if you see an Indexer indexing only one subgraph from an economical perspective. I think we happen to have two of our machines indexing only Enzyme, I believe, or some other subgraph. But in terms of how balanced it is from the rewards point of view, it’s basically the same as if you were spreading to all the subgraphs based on signal.
37:21
What it really means is that an Indexer is optimizing for profit, so Indexer rewards rather than query fees. I think I mentioned that earlier on, there’s this kind of dynamic going on right now, and it’s the nature of a network, right? So what these Indexers that are sort of allocating to single subgraphs are doing is filling a gap where they can see an optimal GRT per hour, you know, can be earned. And they’re sacrificing sort of the curation side of the network, or the query fee side of the network, in order to do that. Now, some would say, and you know, I think it’s a pretty simple or agreeable statement to say, that that is a rational thing to do if you’re trying to maximize profit for your Delegators. But is it necessarily the best thing to do for the network while it’s bootstrapping? Maybe not; in my opinion, it’s not the best thing to do. But, you know, every Indexer has their own strategy. And every Delegator, as you know, has the right to question the strategy that an Indexer has taken on.
Martin, Slim, do you want to add anything?
38:17
Every Indexer has a specific set of context for how they operate. So some Indexers are, you know, a two-person operation, like Payne was saying in his situation. Some others might be, you know, a huge company, which allows them to have a full-time employee and allows them to be closing allocations every day, for example. So that, coupled with their strategy towards the network, is going to be driving how many subgraphs they’re going to be indexing. Or even how big their infrastructure is will probably direct how they approach how many subgraphs they’re indexing. So when, instead of having 10, we have, you know, hundreds or thousands of subgraphs, will all of them be indexing all those thousands of subgraphs? Probably not. That’s not necessarily a bad thing. It depends, again, on their infrastructure, their approach, their operation; from the human resources side it’s intensive, you need to be paying attention to what’s going on in the network to react to the network.
39:16
Recently, I had Zach Burns as a guest on the podcast and had him explain Scalar. He said the network doesn’t work without Scalar. What have we learned, or what’s happening with that, as the migration occurred?
39:28
You know, I don’t think the leaps and bounds that have been made with The Graph software over the last six months can be overstated. For anybody who’s sort of been close to The Graph testnet and has watched the Edge & Node team, Jannis, Ford, and Zac especially, grinding away at this bleeding-edge problem that we have in this space around micropayments at scale, to see that iterate over time, to see how difficult and how much trouble we had with it in the early days, and that would have been around epoch 90 on the network, right, 90, 100, earlier than that, and now we’re, you know, not a million miles away from epoch 200, it’s night and day. It’s absolutely night and day. So back in the early testnet, on my own Indexer, I would often struggle to get into the tens of queries per second without having problems with the payment mechanism, the micropayment mechanism that we were using, state channels. If you compare that now to the Scalar implementation that was deployed, in the smoothest manner you could possibly imagine, from testnet to mainnet on Tuesday, the 25th of May, you wouldn’t believe it; the difference is night and day. On the testnet, I had two Indexers running as test Indexers at the same location, and between them they were, you know, hitting, and this is test traffic, simulated traffic, right, so it’s pretty light in terms of how much time it spends in the database, but I was hitting something in the region of over 2,000 queries per second, which is just, I couldn’t believe it when I saw those numbers, and I didn’t see errors beside them. And that has gone on to the mainnet as well, right. We have a list of about 100 errors, right, that are well defined within the Indexer software, and we used to have, you know, times where you’d have tens of thousands of them coming in every five minutes. But with the work that the guys have done on improving things and introducing Scalar, you know, what we’re seeing on mainnet is a real product that is not suffering from those types of scaling issues that we were having before. It cannot be overstated how much of an edge an Indexer has by spending time on the testnet. A lot of Indexers right now are chasing to get back onto the testnet, because they’ve now realized this, having seen how smoothly things went on the mainnet launch on the 25th of May.
Payne, you want to add anything?
41:55
Yeah, it’s been crazy looking at the old Mission Control screenshots that I had. Yeah, it’s a night and day difference. But also, I wanted to point out that there was one more thing that basically contributed to those errors not showing up anymore, Jim, which is the fact that now Indexers can sync their own network subgraph. And basically, that removes a lot of the dependence on the gateway itself.
42:25
Yep, so just to explain that more in layman’s terms. Prior to the last mainnet release, the Indexers do this sort of polling communication onto, well, there’s basically a subgraph, we use our own subgraph for the network. So there’s one for the testnet, there’s one for mainnet, and we use that subgraph in order to work out how things are going on the network, how Indexers are doing on the network, and to make sure that the Indexer software is making the right decisions at the right time. And what that resulted in was basically all these Indexers just hammering this gateway, well, this GraphQL endpoint, all the time, in some ways committing a distributed denial-of-service attack, right? Flooding the gateway with traffic and requests to the point where it just can’t handle the amount of requests that are coming in. So one of the major software upgrades that was made as part of the 0.15.1 deployment was they re-architected the software so that we could actually sync that subgraph, our own subgraph for the network, to our own Indexer. So once you have that subgraph indexed, you just query it yourself in your infrastructure, and you’re no longer reliant on the network itself to answer those questions for you about how am I allocated on the network, you know, all that type of stuff that the software needs to know in order to make decisions. So it’s a huge upgrade.
43:52
And it’s also cool that it has a failover system, in case your subgraph falls behind chain head, fails, or God knows what happens. It will go back to the gateway subgraph. And it adds more resilience to your entire machine and infrastructure.
44:09
Yeah, resilience is the word, right? That’s what we see all the time, incremental improvements in the resilience, so that, you know, the software performs better and we have fewer unexpected issues coming up. There’s this constant stream of iteration. If you’re familiar with GitHub, you’ll see that, you know, the team are constantly working on new PRs to improve the software on a daily basis, and they’re very receptive as well. Shout out to Ford from Edge & Node, who’s always on the Office Hours as much as he can be, answering questions and also taking suggestions for ways to increase the resilience of The Graph agent that we all use every day.
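A rough editor’s sketch of the pattern Jim and Payne describe above: ask your own locally indexed copy of the network subgraph first, and only fall back to the gateway-hosted one if the local query fails. The URLs and the GraphQL field names here are placeholders and assumptions for illustration, not the exact production schema or endpoints.

```typescript
// Editor's illustration: prefer the local network subgraph, fail over to the
// gateway copy. Endpoints and GraphQL fields are assumed, not exact.

const LOCAL_NETWORK_SUBGRAPH = "http://graph-node:8000/subgraphs/name/graph-network";
const GATEWAY_NETWORK_SUBGRAPH = "https://gateway.example/network-subgraph";

const ALLOCATIONS_QUERY = `
  query ($indexer: String!) {
    indexer(id: $indexer) {
      allocatedTokens
      allocations(where: { status: Active }) {
        id
        subgraphDeployment { id }
      }
    }
  }
`;

async function query(endpoint: string, indexer: string): Promise<any> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: ALLOCATIONS_QUERY, variables: { indexer } }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status} from ${endpoint}`);
  const body = await res.json();
  if (body.errors) throw new Error(JSON.stringify(body.errors));
  return body.data;
}

// Local first; fall back to the gateway if the local copy is behind or down.
export async function activeAllocations(indexer: string): Promise<any> {
  try {
    return await query(LOCAL_NETWORK_SUBGRAPH, indexer);
  } catch (err) {
    console.warn("local network subgraph unavailable, using gateway:", err);
    return await query(GATEWAY_NETWORK_SUBGRAPH, indexer);
  }
}
```

The design point is the one made in the conversation: once every Indexer answers these questions from its own copy, the gateway stops being a single choke point, and the fallback only exists for resilience.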
44:47
Let’s talk about Curators for a minute here, Slim, what advice do you have for members of the Curator community that want to start getting more involved?
44:55
Curation does have quite a lot of risk associated with it. So I would advise researching curation carefully. I would start at ‘This Month in Curation’, which you can find on the forum. And I would also look at the Curator workshops from the testnet Curator program. And especially, I would look at Dave’s presentation in the phase two workshop, where he explains how to assess subgraphs.
Anybody else have an opinion on that?
45:24
Definitely jump in to learn the nuances of building subgraphs. As Slim was saying, the workshop by Dave during the Curator program last year is a great place to start learning how to assess schemas, and learning which subgraphs are the ones that you want to curate will provide a good edge when curation is live. Also, as I mentioned before, the This Month in Curation newsletter by Slim provides a lot of value on how bonding curves work, and that’s also something very important in that role. I would say also become active in the Discord community; The Graph’s Discord server is huge, I think it has over 22,000 people right now. And the Curator channel, even if curation is not live yet, jump over there, always asking what’s going on, what’s the update, and learning about the different nuances. I’m sure there’s going to be a lot of rallying around, as happens in the Office Hours for Indexers, where Indexers gather around and discuss the different, you know, situations that are seen in the network. And the same is going to happen with Curators, so the Curators that are active and are learning as things go live, things happen, and new subgraphs are deployed are the ones that are going to become more knowledgeable.
46:33
It’s a little nuanced, Martin, but does the advice change for subgraph developers? Is it the same advice as for Curators? Or how would you, I guess, change your answer if we’re speaking to subgraph developers?
46:44
Well, I see subgraph developers as Curators, and I’d say they need to signal on their subgraphs so Indexers pick them up and start indexing them. I would say the difference there is how technical that person is going to be. If you’re a subgraph developer, you already know how to write the schemas and can properly assess, or assess more easily, whether there are other subgraphs that you want to curate, whether they’re written properly, and whether they’re interesting for the community to be indexed by many Indexers. If you’re not that technical, that’s where all these resources provided by Slim are super important. So again, starting from the bonding curves, so you start to learn how they work, and then the different risks and rewards of curation, are great places to start if you don’t know a lot about how, you know, smart contracts work, bonding curves work, curation works in general. And that’s gonna be, like, the first step also to interact with Indexers. Some of them probably are already Delegators and already know which Indexers are more active, or which Indexers they trust more. So when you see them more active in the community, that also provides a lot of information for them.
47:53
So turning the attention to Indexers then either new Indexers, or Indexers that need to elevate their activity within the network, Payne and Jim, what’s your advice to those members of the community?
48:04
So I guess it’s pretty similar for new Indexers and, I guess, seasoned Indexers. The most important thing to do right now, if you’re already on mainnet, is to make sure you’re upgraded to the 0.15.1 Graph Indexer stack. Now, if you’re brand new, and you’re new to the network, and you want to contribute, you know, at some point you might want to put the capital together to get on mainnet, either yourself or with some others. The best place to start is get on Discord, get on the test GRT faucet, get some test GRT for yourself, and spin up a testnet Indexer and start being active within the testnet channel on Discord. This is the way that you will improve your knowledge. This is the way that you will meet myself, Payne, Martin, the whole community. And this is the way that you’ll get recognized if you’re a sort of net positive contributor in that space. Attend the Office Hours on Tuesdays at 10am Pacific. The purpose of that hour originally was to help Indexers with their technical problems, and that’s still the purpose, but we also talk about sort of high-level stuff there a lot of the time, because it’s a chance for Indexers to get together every week. And then I would also suggest that, you know, if you’re brand new to The Graph, you want to look at some of the resources that are out there. There’s the codex.thegraph.com, which is sort of like a curated list of links for all types of stuff on The Graph. There’s also The Graph Academy, that’s thegraph.academy, which is run by Stefan and is a great resource for all types of stakeholders, including Indexers; there are some guides there for how to deploy on Docker and bare metal. And then, of course, also look at the main website, because, you know, the core place that we used to send people before we expanded our sort of, you know, documentation was the blog, where you’ll find some of the posts from one of the founders, Brandon, which will really help you understand the protocol at a fundamental level.
50:11
Yeah, definitely just be active in the community, learn by doing and making mistakes, on testnet, of course, before you jump into mainnet, but just head into it and see what problems you run into, and figure out ways to overcome them. This is how I learned, basically; I just experimented with things and jumped straight into it. And whenever I had problems, I tried to solve them by myself. If I couldn’t, I then searched the Discord. And then if I didn’t find an answer there, I just asked, and people will gladly help you, you know. We’re all here together to make things better at the end of the day, and also for the network; the more Indexers, the better for the health of the network, you know. So you’re more than welcome to just come and try and see how well you’re doing. As for resources, definitely the Academy is basically the best place to get started easily. You can also use the official Graph docs for setting up an Indexer on Google Cloud with Terraform and Kubernetes. But I feel like Jim’s guide and our StakeSquid guide are easier for someone that is just getting started. I mean, you can ask Martin how hard it was to set up; it was fairly easy.
51:32
Definitely. At least the Docker guide by StakeSquid is super easy.
51:36
Yep. So it doesn’t even take that many resources, either; you can run it on, like, an eight-core or four-core machine, just to get a feel for it. You know, you can basically get those machines for, like, I don’t know, less than 30 euros a month in some places. Again, that’s just to get the feel for it; it’s not gonna be enough for production. I mean, it also depends on how big your stake is. But yeah, I mean, it’s easy to get started. You just hit the faucet; first, get the role, there’s a channel called roles in The Graph section of the Discord, you get the role, you hit the faucet with your Ethereum address, and then you follow the guides on the Academy. And then you should be good to go within, I don’t know, one hour at most, even if you don’t have any clue what you’re doing.
52:26
I also want to give every member of the panel here the opportunity to give advice to another stakeholder within the community, that is, members of the Delegator community. So maybe going around and giving everybody the opportunity to give some advice to Delegators. Let’s start with you, Martin.
52:41
For new Delegators, of course, it’s jumping into the community. And there’s a specific video by Brandon that is very insightful on how to choose your Indexer; I would say start there. For existing or current Delegators, I think it’s worth checking in with your Indexer. As you were asking before, there’s this question of whether these are red flags, if they are not indexing across all of these. Again, as I was saying before, it’s not a red flag, but it’s worth checking in with them to see and understand, because this is the opportunity for you to get closer to your Indexer and know what their approach or strategy to the network is. I think definitely that’s the main opportunity in this phase of the migration.
53:23
I would recommend putting some thought into who you choose to delegate to. Your GRT is safe, but there’s a 0.5% delegation tax, and if you were to undelegate your stake, it has a thawing period of 30 days. So I would make sure to put some thought into what Indexer you delegate to. For myself, I look at criteria like: communicative, skilled, aligned with the ecosystem, and having low yet sustainable cuts. If possible, look into splitting your delegation stake between multiple Indexers. And yes, as Martin says, contact your Indexer, or your potential Indexer, if there’s something you’re wondering about. And please join the Discord community if you have any questions.
54:13
Just look for Indexers that are active in the community and know what they’re doing, because this is a way to have some certainty that that Indexer will sustain their business and will sustain the rewards generation, and you’re basically maximizing your profits over time. If you just go randomly spray and pray on three random Indexers on graphscan, for example, sorted by APY, you’re not going to do well. Even if those APYs are high, it doesn’t mean they’re performing well most of the time. They just give out their own rewards as Indexers to the Delegators to attract more Delegators, which is not a bad thing in itself by any means, but that’s temporary, right? And you have to make sure that at the end of the day, at the end of the month, at the end of the year, you’re with someone that you can trust to do their best, always, at every single moment in time, to get those rewards, right, and those queries. And yeah, I mean, the only way to do that is by going with someone that is active in the community.
55:29
I think the guys have done a great job of answering that question. The only thing I would add, both for new Delegators and existing Delegators looking to make changes to their delegations, is to be cognizant of gas costs: how much the transaction to delegate to an Indexer is going to cost you versus how much GRT you are delegating to them. There’s a great website called gasnow.org, which you can go to to look at the current price of gas on Ethereum. I would strongly recommend that before you press any buttons in your favorite wallet to delegate, you check out the gas prices, and only do it when the gas prices are relatively low.
56:09
You can also go to oracleminer.com/graph, and if you click on any of the Indexers and scroll down, you’ll see a table of how much it costs to delegate and undelegate, and, for Indexers, also to open and close allocations.
56:26
A follow-up question then, or further advice to Delegators, would be this question of what Delegators should expect to see in terms of query fees. Maybe I’ll go to you on that, Jim. What would be your advice to Delegators regarding that?
56:39
One thing that we haven’t really talked about is the actual content of the queries that we have on mainnet right now. We have a mix right now of the migration partners themselves, and stress testing from the Edge & Node team across all of those subgraphs. So there’s a mix of simulated queries and real queries, depending on whether the migration partners have decided to start testing on their subgraphs or not. So it’s difficult right now to start judging the amount of query revenue that is being generated. One of the reasons for that is that all of that traffic is paid for in GRT via micropayments, via Scalar. However, you also have to consider that part of the mechanism by which Indexers get paid for these queries is that they have to make settlement transactions on the Ethereum network. And many Indexers will avoid doing that, because where we’re at with the network today, with the sort of traffic we’re seeing right now, the amount of GRT in query fees that an Indexer is making might be less than the cost in gas to claim, or claw back, the query revenue they’ve generated by serving queries. So if you were to look at, for example, the network.thegraph.com dashboard you mentioned, there is a section there on query fees, and you’ll see per epoch how much query fee was claimed in that period. It’s very important to know that that doesn’t necessarily mean that’s what was served; there will likely have been much more than that served in the period, but the Indexers haven’t necessarily chosen to claim it yet, because the amount they’d be claiming is too small versus the gas it would cost them, and they’re just being efficient about how they claim those fees. The reason I bring this up is that I think it’s too early to start judging Indexers by their query fee performance today. You will see that GRT number per epoch start to increase over time, but on an individual basis it’s too early to start ranking Indexers by the amount of query fees they’re bringing in. As we move through migration, that’s going to continue to be the case, until we get to the very large scale of having thousands of subgraphs on the network.
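As a rough illustration of the claiming economics Jim describes, and not an actual network figure, here is a small Python sketch comparing accrued query fees against the gas cost of a claim transaction. All the numbers (gas units, gas price, token prices, accrued fees) are hypothetical assumptions for the sake of the example.

```python
# Hypothetical numbers only: illustrates why an Indexer might defer claiming
# query fees when the gas cost of the settlement transaction exceeds the fees.

def worth_claiming(accrued_fees_grt: float,
                   gas_units: float,
                   gas_price_gwei: float,
                   eth_price_usd: float,
                   grt_price_usd: float) -> bool:
    # Cost of the on-chain claim (settlement) transaction, converted to GRT.
    gas_cost_eth = gas_units * gas_price_gwei * 1e-9
    gas_cost_grt = gas_cost_eth * eth_price_usd / grt_price_usd
    print(f"accrued fees: {accrued_fees_grt:.1f} GRT, claim cost: {gas_cost_grt:.1f} GRT")
    return accrued_fees_grt > gas_cost_grt

# Assumed values: ~300k gas for the claim, 60 gwei gas price,
# ETH at $2,000, GRT at $0.60, and 40 GRT of accrued fees.
if worth_claiming(accrued_fees_grt=40.0,
                  gas_units=300_000,
                  gas_price_gwei=60,
                  eth_price_usd=2_000,
                  grt_price_usd=0.60):
    print("Claiming now is worthwhile.")
else:
    print("Better to wait and batch the claim later.")
```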
58:52
Jim, what would be your advice to anybody that’s just getting involved or starting to learn about The Graph and hasn’t really participated as a stakeholder at this point?
59:01
Sure, I mean, the first thing is to get a basic understanding of it from the top level, and the source I like to send people to is a YouTube channel called FINEMATICS. They do an excellent job of illustrating various blockchain and DeFi principles, and if you search their channel, you’ll find one for The Graph protocol. It does a very good job of laying out how the different stakeholders interact with each other within the protocol. Once you’ve had a look at that, my next recommendation is always to go and have a look at TheGraph.com. Specifically, it’s nice to look at the blog section, because Brandon, one of the founders, his blog posts have been sort of the linchpin for education from the very beginning of The Graph, so if you want to get into the details, it’s right there. You can also have a look at The Graph Explorer there, where you can see the wide array of subgraphs that are available. And finally, I would also strongly recommend that people go and have a look at thegraph.academy. It’s a very rich environment with lots of documentation for all the stakeholders, created by one of the community members, Stefan, and it’s a really great place to start. It has lots of visual documentation around how things work and gets a lot of great feedback from the community. Those are the things I would recommend. Anybody else have an opinion on that?
1:00:20
I would love to recommend the recording of The Graph Day keynote by Yaniv. It gives an excellent presentation of the vision and what the protocol does, from a very zoomed-out perspective.
1:00:34
I would add: join the community in any way that you think is viable or possible. The first step is basically jumping into one of our community channels, probably the Discord server or Telegram. You’ll definitely feel from the community that everyone is very welcoming. It’s not just somebody answering questions, it’s everyone helping everyone, and that’s the best place to learn, or to get directions if you don’t understand a specific concept, after reading all the links and content that Jim and Slim have shared here.
1:01:08
Yeah, this was one of the things that attracted me to The Graph so much, the community. Everyone was just helping each other, and it was such a nice place, I don’t know how else to describe it, it was just unique. I didn’t find any other project with a tighter community than The Graph. And during the testnet, everyone from the team was very active and actively helping us, because, well, it was a testnet and we needed guidance. Now we have people that know what they’re doing, they’re domain experts for different parts of the ecosystem, they are helping and in charge of things, and everything is just super organized. This is what I’m trying to get to: the community is absolutely amazing, and I think being active in the community is the easiest way to learn things. You just read the blog posts, like Jim said, and if you have any more questions, go into whichever channel you feel is appropriate, ask your question, and you will get answers.
1:02:18
So Payne mentioned the strength of the community being such an important factor at The Graph, and some cool community efforts involve things like The Graph Grants program. Martin, maybe going to you: what can you say about The Graph Grants program?
1:02:33
That’s a great question. The Graph Grants program was kickstarted a couple of months ago. It definitely got a lot of attention and received several hundred applications, and it has enabled support, in different ways, for different entities and people working across the ecosystem, empowering them to tackle different challenges. From there we’ve seen protocol infrastructure work on automation and extraction, as well as tooling for monitoring and alerting, the different dashboards — Graphscan, Graphlets, Grafana dashboards — even bots that have been or are currently being worked on to empower more of the community. So that’s where a huge impact from the Foundation comes through the grants program: it’s enabling a lot of innovation across different areas, the tooling and the protocol infrastructure, a lot of work on subgraphs, since there’s more data that needs to be organized and indexed, and also community work. As Payne was mentioning, great examples are The Graph Academy, which has been mentioned a couple of times, but also online education programs, or the Graphscan community, which is another huge one, including this podcast. I think there’s also more to come. A good one that happened recently was the POAP grant, where they created over 20,000 NFTs that will reward the different active participants in the network. That’s definitely something The Graph Foundation has been focused on during the last couple of months.
1:04:04
I think the interesting thing for me, sort of observing this, is to see some of these wave one grant applicants take their products right through to fruition. Graph Academy gets mentioned a lot, and there’s a reason for that: it’s very high quality, it’s very well executed, it has a real impact on the education of all stakeholders, and it has a real impact on how much one person on Discord can do to help people. It used to be that as a community member on Discord who helps people, you’d spend quite a lot of time doing repetitive stuff, answering the same questions: where do I start? How do I choose who to delegate to? If I have this certain problem, what do I do? You’d spend all your time doing that. But with resources like The Graph Academy, you’re now pointing to high quality material elsewhere, and a person can take themselves on that journey, to decide what they want to do in the network or what they want to learn about the network. So we already feel the tangible benefits of wave one, and that’s just one example. The POAPs are another great example. I love POAPs, I’m addicted to POAPs for some reason. They’re just NFTs, right? But they represent something, especially in a community that’s so rich, like The Graph, with such a rich history. These NFTs really mean something to me, and so much effort has gone into their design and into actually executing the program. These are all things that have been funded by the community via the Foundation, and we’re only going to see more and more of that. I get to do interviews as a domain expert, where I interview grantees, and the stuff I see is just awesome, lots and lots of really cool things. I’m really looking forward to seeing what comes out of wave two.
1:05:55
What I saw from those grants was that people who love The Graph took advantage of them and started building, and getting paid for building things, and everything is going amazingly well. Like Jim said, The Academy is absolutely fantastic, everyone loves it. This podcast, everyone loves it. And I know there’s a bunch of other tools being built right now, and I see everyone using them every single day. Once the updates and all that sort of stuff get released, these new tools are going to be even more amazing than they already are. And most importantly, for the indexing side, is the fact that TurboGeth now supports tracing, kind of; we still have to nail things down on it, but I think that’s the most impactful change in the lives of all the Indexers. I’m talking specifically about this stakeholder group, but that’s one of the biggest things that has happened thus far, and we’re making sure that it will go well from this point onwards as well.
1:07:06
Yeah, working as a domain expert for the grants process, I’ve noticed that not only do we have hundreds of applicants, we have hundreds of really high quality applicants who are dedicated and aligned with the ecosystem. I’m excited to see what wave two will bring.
1:07:25
So I’m interested in your perspective, as the one doing the GRTiQ Podcast. How many months have you been doing this podcast for now? Well, it’s been since March 26th. March? Wow, it feels much longer than that. So I’d be interested to know what your experience has been in terms of interacting with the community, both through your podcast and also Twitter, and maybe Discord. What’s your experience been, as a guy who started with, I guess, limited knowledge, and has now had the chance to talk to all these different community members?
1:07:58
I think my experience has been, first, that meeting members of the community has been the thrill of it. I didn’t really know what the value of the podcast would be, other than that I was a curious person who wanted to understand The Graph better, and I thought the best way to obtain that knowledge would be to speak with different members of the community; the podcast grew out of that. What I found is that I now have friends all across the world, and the community is so strong, it’s like-minded people who are really working for the community. So the thing I’ve learned first and foremost is that there are real people, with genuine interests and motivations to help build The Graph, behind all of the activity you’re seeing. Even with the push towards decentralization, the push towards DeFi, and all these other really great things in the crypto space, it’s still just people getting up every day and doing hard work. So for me, running the GRTiQ Podcast, the opportunity to meet these people and tell their stories has been the thrill. The other thing I would add is that there is a real hunger in the community, particularly among Delegators, for more knowledge and understanding. The Graph is complicated and it is technical, but with the right approach anyone can pick it up and understand it, and there are a lot of people interested in doing so. I’m tremendously humbled that the podcast is just one channel by which people are able to do that. But as has been mentioned many times today, The Graph Academy is an incredible resource for anybody who wants to take a self-directed approach, and The Graph’s official blog, getting on Discord, Telegram, all of these different channels provide easy access to the information you want. Part of the mantra of a decentralized world is self-responsibility, taking responsibility for your own actions and your own acquisition of knowledge, and The Graph community provides many ways for people to do that. So it really comes down to how much effort you’re willing to put in. That’s the real thrill of it: everyone can participate.
1:10:09
Awesome. Thanks for letting me turn the mic on you. Thanks for being a good sport.
1:10:12
Sure, I’ll just cut that part anyway. I’m joking, Jim.
Guys, thank you so much for joining this panel and taking part in helping educate members of the community as we continue to move through different phases of the migration. Would you all come back so we can have another panel discussion? Glad to. Definitely. Absolutely. Yeah, for sure. Perfect. Well, thank you so much for your time.
DISCLOSURE: GRTIQ is not affiliated, associated, authorized, endorsed by, or in any other way connected with The Graph, or any of its subsidiaries or affiliates. This material has been prepared for information purposes only, and it is not intended to provide, and should not be relied upon for, tax, legal, financial, or investment advice. The content for this material is developed from sources believed to be providing accurate information. The Graph token holders should do their own research regarding individual Indexers and the risks, including objectives, charges, and expenses, associated with the purchase of GRT or the delegation of GRT.
©GRTIQ.com