GRTiQ Podcast: Special Release: Anirudh Patel on The Graph as AI Infrastructure

On May 28, 2024, Semiotic Labs, a core developer team working on The Graph, released an exciting new white paper titled “The Graph as AI Infrastructure.” This paper details the upcoming launch of two new AI services to be built on The Graph: the Inference Service and the Agent Service. The paper also explores what makes The Graph the best place to serve the convergence of web3, blockchain data, and AI. In addition to all this, Semiotic Labs also announced a two-week public demo of their ChatGPT-like AI product called Agentc, which was built using The Graph.

To help us better understand the details of the white paper, understand these new AI services, and comprehend the implications for the future of The Graph, I’ve invited Anirudh Patel, or Ani, back onto the podcast. Ani’s been on the podcast several times now, but he’s back for this special release and will answer these questions and more!

The GRTiQ Podcast owns the copyright in and to all content, including transcripts and images, of the GRTiQ Podcast, with all rights reserved, as well as our right of publicity. You are free to share and/or reference the information contained herein, including show transcripts (500-word maximum) in any media articles, personal websites, in other non-commercial articles or blog posts, or on a non-commercial personal social media account, so long as you include proper attribution (i.e., “The GRTiQ Podcast”) and link back to the appropriate URL (i.e., GRTiQ.com/podcast[episode]). We do not authorize anyone to copy any portion of the podcast content or to use the GRTiQ or GRTiQ Podcast name, image, or likeness for any commercial purpose or use, including without limitation inclusion in any books, e-books or audiobooks, book summaries or synopses, or on any commercial websites or social media sites that either offer or promote your products or services, or anyone else’s products or services. The content of GRTiQ Podcasts is for informational purposes only and does not constitute tax, legal, or investment advice.

SHOW NOTES:

SHOW TRANSCRIPTS

We use software and some light editing to transcribe podcast episodes. Any errors, typos, or other mistakes in the show transcripts are the responsibility of GRTiQ Podcast and not our guest(s). We review and update show notes regularly, and we appreciate suggested edits – email: iQ at GRTiQ dot COM.

The following podcast is for informational purposes only. The contents of this podcast do not constitute tax, legal or investment advice. Take responsibility for your own decisions, consult with the proper professionals and do your own research.

Nick (00:13):

Welcome to a special release of the GRTiQ Podcast. On May 28th, 2024, Semiotic Labs, a core developer team working on The Graph, released an exciting new white paper titled The Graph as AI Infrastructure. This paper details the upcoming launch of two new AI services to be built on The Graph: the inference service and the agent service. The paper also explores what makes The Graph the best place to host the convergence of web3, blockchain data, and AI. In addition to all of this, Semiotic Labs also announced a two-week public demo of their ChatGPT-like AI product called Agentc, which was built using The Graph. To help us better understand the details of the white paper, what these two new AI services are and how they work, and to comprehend the implications of all of this for the future of The Graph, I’ve invited Anirudh Patel, or Ani, back onto the podcast.

(01:08):

Ani’s been on the podcast several times now, but he’s back for this special release and he’s going to help answer some of these questions and more. And don’t forget to check out the show notes for links to all the resources Ani and I discuss, as well as transcripts for our conversation. Ani, welcome back to the GRTiQ Podcast. Longtime listeners are already familiar with you. You’ve been here a couple times already, but I’m extremely happy that you were willing to carve out some time and come talk about this new white paper that the team at Semiotic Labs has just released. In case listeners haven’t heard some of the past episodes with you, Ani, do you mind just quickly reintroducing yourself?

Anirudh (Ani) Patel (01:45):

Yeah. Hey Nick, great to be back. So definitely third time around here, so I’ll try to keep my introduction a little bit shorter this time. I’m Anirudh, or Ani. My background is actually as a researcher in reinforcement learning, which is a subdomain of AI, and currently at Semiotic I lead our data services team, which is focused on releasing Subgraph SQL, which I believe should be in private beta by the time this podcast goes out.

Nick (02:16):

Amazing. So Semiotic Labs is releasing a private beta of Subgraph SQL, which a lot of people have been talking about. And as you mentioned there, Ani, yeah, you’ve been here a couple of times. I’ll put links in the show notes for anybody who wants to go back, but you joined for a special one-on-one, which was great. We got to learn all about your background and some of the things that you were involved with before you got activated and involved full-time working in web3 and at Semiotic. And of course we had a very popular episode where there was a panel discussion on crypto and AI, and we’ve come full circle here today.

(02:48):

So for listeners that don’t know, let me just provide a little context. Semiotic Labs has released a white paper that explores The Graph, AI, and some new services related to AI that are going to be coming to The Graph. A lot of people are excited, and I think there’s a lot of noise in the industry about AI, so it’s super fun to see The Graph not only step forward through this white paper, but also say something very tangible and specific. If you don’t mind, Ani, just take us behind the curtain a little bit here and tell us: where did the idea for this white paper come from?

Anirudh (Ani) Patel (03:22):

Actually, the last time I was on, we talked a little bit about Agentc, which was a Semiotic effort to use large language models, or LLMs, to create a tool that used natural language as an interface to analyze what was, at the time, just DeFi data. We had some thoughts about expanding it further. In that episode, one of the goals that I’d stated was actually that we wanted to use Agentc as a learning experience to bring a lot of these things back onto The Graph, and so we’ve already started working on part of that vision with Subgraph SQL. Part of this tool was querying SQL databases so that we could actually get the data that we wanted to analyze using AI, but we’re still missing the AI piece, and this white paper is an effort to start to close that loop.

(04:03):

So after Subgraph SQL and the AI services are launched, any developer will have the ability to launch AI applications that leverage both The Graph’s data and The Graph’s compute resources. And at a high level, that’s very much a key takeaway from this white paper for me. We want people to see The Graph not only as a data marketplace but also as a compute marketplace.

Nick (04:27):

In essence, what you’re saying here is that The Graph is evolving past just being a data provider, to a place where users will be able to run and deploy AI models using The Graph. So again, I’m just going to assume that there are listeners that haven’t had the chance to go through the white paper quite yet. In simple terms, if you can, just give us a brief overview of the white paper and some of the main themes that you and the team focused on in what you wrote.

Anirudh (Ani) Patel (04:52):

We divided the white paper into two pieces, so there’s the main text and there’s the appendix, and they’re targeted towards different people, which is why I wanted to split these up in my answer. The main text is really targeted towards the general audience, so this is anyone in the crypto space who is aware of The Graph. If you’re in that category, you should be able to understand this section. And so in it, we present the idea of The Graph as being useful for AI in two ways: through its compute providers and through its data. And we discuss how we can use these pieces to build two different AI services, the inference service and the agent service. We also talk about how The Graph’s data, from Subgraphs to Geo, is useful for training and fine-tuning models, but that’s not something that we’ve planned to natively support through either of these AI services, at least to begin with.

(05:37):

Another thing we talk about a little bit is the risk of a compute provider running a different model from the one that you paid them to run. So there is an aspect of verifiability that we care about with decentralized AI inference, so we dedicate some time to that. The appendix, on the other hand, is a lot more tailored towards developers and people interested in really applying AI to their own projects. It’s a lot more technical, and it also talks through how we believe crypto and AI can evolve. We specifically focus on the UX problem of crypto, UX being, I think, a generally agreed-upon problem that crypto has, and we use that lens to talk about the AI services in a variety of different ways. We talk about retrieval augmented generation, or RAG, and we talk about knowledge graphs. It’s not really a survey paper, but we do cite several papers that you can read to dive deeper into these various subjects as well. It’s just a good reference.

Nick (06:38):

And as listeners know, I am non-technical and I’ve had the opportunity to read through this white paper and I found the experience to be exactly like you described there. The first part, I understood it, I was able to navigate it and I did go through the appendix and it was more technical, but it was also very educational. I had to stretch a little bit to understand some of those concepts, but I learned a lot about AI by virtue of reading the complete white paper. Another theme that comes up in the white paper, and you talked a little bit about it there, is that The Graph is best suited or well-suited to be a service provider for AI. Can you double click on that a little bit and explain why that might be true?

Anirudh (Ani) Patel (07:15):

Yeah, I’m going to break up my answer into three pieces. So firstly, I think The Graph is well-suited for these AI services because The Graph has compute providers, right, we call them Indexers currently. Right now Indexers are using their compute to serve GraphQL queries; with Subgraph SQL, they’ll be able to use their compute to serve SQL queries. AI inference, running AI models, is really just another form of compute. There’s no reason that they can’t run AI models as well, and these Indexers exist today. They don’t need to be bootstrapped from scratch. And so this is to me a major advantage over potential competitors, which is that if they want to do this sort of decentralized inference, they’re going to have to bootstrap a lot of this infrastructure, a lot of this community that The Graph has, from scratch. In a similar vein to my argument about bootstrapping Indexers, The Graph also already has a set of economics that govern how compute providers, the Indexers, interact with users, and how users can tell compute providers what information, what AI models, what data they need.

(08:15):

This is another thing that any other decentralized inference provider would have to build up from scratch. The next reason I believe that The Graph and AI is a great partnership is just the fact that The Graph has data, and AI really needs data. So one of our goals, you’ll see this with Subgraph SQL, is we’re going to make it as easy as possible to use The Graph’s data for AI. And again, just to call back a little bit, if you remember Agentc, it’s kind of like a North Star here. It’s this combination of AI and data that’s really powerful. There are a lot of other people that can do decentralized inference. There are very few people who have the ability to cover both the data and the inference spectrum. And then my last point here is just that The Graph is a decentralized protocol. This gives users the ability to run AI models without fear of censorship, without fear of some centralized provider going down and taking their model down with it. There’s really no fine print you have to read with The Graph.

Nick (09:10):

You’ve mentioned a couple of times now these two services that are proposed within the white paper, inference and agent. Can we get a little more specific there? And again, assuming listeners haven’t quite made it through the white paper yet, what are inference services and what is meant by inference services and agent services?

Anirudh (Ani) Patel (09:29):

Inference is just jargon for running AI models, right. So whenever you’re using ChatGPT, you type some text, you hit enter, and ChatGPT is running inference. So the inference service just enables Indexers to serve AI models on The Graph. One thing I do want to touch on a little bit is that we do actually expect to see some specialization emerge here. So some Indexers will be experts in using their compute to serve data queries, and some will become experts in using their compute, you know, they’ll attach GPUs to their compute, for example, for AI inference. And we expect this specialization is going to drive down prices and latency for the inference service.

(10:08):

And the agent service, well, I mentioned before how it’s the combination of AI and data that makes The Graph so powerful and unique. The agent service is trying to take advantage of this. So with these specialized Indexers for AI and specialized Indexers for data, we’ll have low-cost, low-latency inference and low-cost, low-latency data, and the agent service is going to let you tie these two things together in a way that’s more native to The Graph, to really make creating AI dapps as easy as possible.

Nick (10:36):

So we’ve got inference service, we’ve got agent service, and I appreciate the introduction there. Let’s go a step further here, Ani, and talk about what that looks like. So how should we envision the ways in which a developer will come along and use The Graph for inference service specifically?

Anirudh (Ani) Patel (10:54):

So we’re pretty early in the design process here, so I’m going to give you a bit of a preview of some of our thoughts about this, but this is by no means a settled question for us, right. And so as a developer, the way we envision things currently, if you want to use the inference service, what you’re going to do is push your AI model onto IPFS. These could be models that you fine-tune yourself, or these could be sort of general-purpose open models like Mixtral. Indexers can then pick those models up, a user would send a request to an inference gateway to run a particular model against specific input, and the gateway would return a [inaudible 00:11:28] to the user, which the user can use to grab the results of the computation once it’s done. We do realize, especially with LLMs, people might be interested in streaming outputs. That said, the jobs-based approach that I talked about generalizes better across different types of AI models, so we’ll start there and we’ll think a bit more about streaming in the future.
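The jobs-based flow Ani describes can be sketched in a few lines. To be clear, this is a hypothetical mock, not Semiotic's actual design: the class and method names (`MockInferenceGateway`, `submit_job`, `get_result`) and the IPFS CID string are all illustrative assumptions, and a real gateway would involve Indexer selection, payments, and asynchronous completion.

```python
import uuid
from typing import Optional


class MockInferenceGateway:
    """Illustrative, in-memory stand-in for a jobs-based inference gateway.

    A real gateway would route the job to an Indexer serving the model
    pinned on IPFS; here we just fake the result synchronously.
    """

    def __init__(self) -> None:
        self._jobs: dict = {}  # job_id -> result

    def submit_job(self, model_cid: str, prompt: str) -> str:
        """Submit an inference request; returns a job ID immediately,
        rather than streaming the model output back to the caller."""
        job_id = str(uuid.uuid4())
        # Stand-in for actual inference on an Indexer's GPU.
        self._jobs[job_id] = f"[output of {model_cid} on: {prompt!r}]"
        return job_id

    def get_result(self, job_id: str) -> Optional[str]:
        """Fetch the finished result; None if the job ID is unknown."""
        return self._jobs.get(job_id)


# Usage: the caller holds only the job ID and fetches the result later.
gateway = MockInferenceGateway()
job = gateway.submit_job("ipfs://QmExampleModelCID", "Summarize this subgraph")
result = gateway.get_result(job)
```

The design choice the mock illustrates is exactly the trade-off mentioned above: a job ID works for any model type (image, tabular, LLM), while streaming would be an LLM-specific optimization layered on later.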

Nick (11:47):

Ani, and if I’m not mistaken, the white paper also says that open general-purpose models will be coming to The Graph. And so in some ways it’s not a wait-and-see whether developers come as part of this AI service; Semiotic is going to add open general-purpose models. Do I have that correct?

Anirudh (Ani) Patel (12:04):

That’s a great point. So we do want to seed the inference service with several models in advance. In all likelihood, Semiotic, since we have expertise and a background in AI, will probably serve as an AI Indexer as well. And so we want to give the community a good starting point, so that the inference service is useful immediately at launch. Part of this is going to be deploying various open models, but part of this is also going to be, I’ve talked a bit about Agentc, taking Agentc, open sourcing its code, and using that to demonstrate the transition of moving an AI application within the crypto space to be built on top of the Subgraph SQL data service and the AI inference service. And so this will be a very concrete example, and it’ll also be useful. I think people are generally interested in natural language to SQL. It’s an explored topic, and you see people within the crypto space and within the web2 space working on this problem.

Nick (12:57):

Totally agree, and I love the vision there for Agentc. That’ll be fun to see happen. I want to then do what we just did with the inference service for the agent service. So can you describe or explain how developers would use The Graph to run an agent service?

Anirudh (Ani) Patel (13:13):

So we’re a lot less certain about how best to structure things for the agent service. So let me actually back up a step first and just make sure we’re all on the same page when it comes to defining what an agent is, because there are multiple definitions. I see a few definitions floating around within the crypto space, there’s a definition we use in reinforcement learning, and there’s a different definition that roboticists use. So the definition we’re using is the one that roboticists use, which is basically that an agent has some form of computation, the ability to perceive the world, the ability to act within the world, and then the ability to communicate with other agents. So those are four pieces. AI, if you think about it, is the computation piece. It’s the brain of the robot. But all this other stuff, the ability to act within the world or the ability to perceive the world within the crypto space, this really requires leveraging some of the other data services that The Graph has and that The Graph plans to have.

(14:08):

Right. So from the perspective of the ability to perceive the world, we can think about agents as including Subgraph SQL, GraphQL, whatever we need to actually get blockchain data. We would also include Geo in this, whatever we can use to get data into the agent, to the brain, so that the brain can then do some thinking. Then on the other side, The Graph has plans to release an RPC service, and so you’ll be able to use this eventually to actually execute transactions on the blockchain.

(14:36):

And so the agent service is really seeking to tie all these components together in an interface that makes it easy for users to create new AI applications. And what we sort of envision is that people will start by creating very simple agents. We’ll create a few based on Agentc as well, and then people will use these as building blocks to build up more and more complex agents, until the point where we can really see some truly complex applications deployed on the blockchain. Tying all of this together, if I’m not making it clear enough, is The Graph’s data. It’s going to be The Graph’s RPC service and The Graph’s AI services.
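The four agent pieces Ani lists, perception, computation, action, and communication, can be sketched as a minimal interface. Everything here is a hypothetical illustration, not The Graph's actual agent service API: the `Agent` class and the lambdas standing in for a data query, an inference call, and an RPC transaction are all made up for the example.

```python
from typing import Any, Callable, List, Tuple


class Agent:
    """Minimal agent in the roboticists' sense described above:
    perceive the world, compute, act, and communicate with other agents."""

    def __init__(self, name: str,
                 perceive: Callable[[], Any],
                 think: Callable[[Any], Any],
                 act: Callable[[Any], Any]) -> None:
        self.name = name
        self.perceive = perceive  # e.g. a Subgraph SQL / GraphQL / Geo query
        self.think = think        # e.g. an inference-service call (the "brain")
        self.act = act            # e.g. a transaction via the planned RPC service
        self.inbox: List[Tuple[str, Any]] = []  # messages from other agents

    def step(self) -> Any:
        """One perceive -> think -> act cycle."""
        observation = self.perceive()
        decision = self.think(observation)
        return self.act(decision)

    def send(self, other: "Agent", message: Any) -> None:
        """Communication: deliver a message to another agent's inbox."""
        other.inbox.append((self.name, message))


# Usage: a toy "watcher" agent that reads a price and flags a threshold.
price_feed = iter([95, 102])
watcher = Agent(
    name="alert-agent",
    perceive=lambda: next(price_feed),   # stand-in for a data query
    think=lambda price: price > 100,     # stand-in for a model's judgment
    act=lambda hit: "ALERT" if hit else "ok",
)
first, second = watcher.step(), watcher.step()
```

The point of the sketch is the separation: the "brain" (`think`) is the only AI piece, while `perceive` and `act` are where The Graph's data and RPC services would plug in.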

Nick (15:10):

That’s a great description, Ani, and when it comes to how excited I am about reading this white paper and these AI services, the agent service is really interesting to me, ’cause I think there are some really interesting things that will emerge as people learn to interact with and use the agent service. Can you give us a hypothetical, or a couple hypothetical use cases, that you think are ways people will use the agent service on The Graph?

Anirudh (Ani) Patel (15:34):

Sure. So as an example, one of the things that I’m pretty keen about within the web3 space is sort of the UX problem that we have. I see this as one of the largest barriers to entry for web2 people to actually enter the web3 space. And one of the things that’s very confusing for new users is when they enter and they don’t understand, “What should I use to query data?” Or, “Where can I go to do this thing or that thing?” One of our hopes for The Graph is that people will start to really upload a lot of their documentation onto Geo, and this pairing of Geo with the inference service is one example of an agent, where users will be able to ask ChatGPT-esque questions to try to get information about different crypto protocols through their documentation: how to interact with them, what their tokens are, et cetera.

(16:22):

And I see this as an example of lowering the barrier to entry of creating an agent that, in turn, lowers the barrier to entry for web2 users to enter the web3 space, which opens up the market for all of us crypto people, so I think that’s a win all around. So this is one example of an agent. To me it’s sort of this knowledge-enhanced large language model, which is what it’s called in the literature. This could be via RAG, retrieval augmented generation, or this could be via knowledge graphs like Geo. Another example of something that really excites me with the agent service is the applications built on top of Agentc, to extend Agentc beyond where it already is. We have this piece where you can query using natural language and plot and analyze using natural language, but there’s a second piece that’s missing. It’s the piece of, okay, well now I have all these insights.

(17:09):

How can I use these insights to then execute some actions? I want some notification or an alert of some type if a certain event occurs, or I want to, based on that alert, trigger a transaction on the blockchain. I want to swap token A for token B. These are all examples of agents. You have the natural language to SQL agent, you have the data-to-plotting agent or the data analysis agent, you have the alerts agent, you have the swapping agent. And again, we go into this in much more detail in the appendix, and I think there’s also a Semiotic demo knocking around somewhere that we can share.
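The chaining described here, simple agents composed into more complex ones, can be sketched as plain function composition. All of the function names below (`nl_to_sql`, `run_query`, `alert_agent`, `swap_agent`) and their canned return values are invented for illustration; none of them are real Agentc or Graph APIs.

```python
from functools import reduce
from typing import Any, Callable


def compose_agents(*agents: Callable[[Any], Any]) -> Callable[[Any], Any]:
    """Chain simple agents left to right: each agent's output becomes the
    next agent's input, yielding a more complex composite agent."""
    return lambda payload: reduce(lambda data, agent: agent(data), agents, payload)


# Hypothetical stand-ins for the agents mentioned in the conversation.
def nl_to_sql(question: str) -> str:
    # Would call an LLM via the inference service in a real system.
    return "SELECT price FROM swaps ORDER BY block DESC LIMIT 1"


def run_query(sql: str) -> dict:
    # Would hit the Subgraph SQL data service; here, a canned row.
    return {"price": 103.5}


def alert_agent(row: dict) -> dict:
    # Attach a flag that a downstream agent can act on.
    return {**row, "alert": row["price"] > 100}


def swap_agent(signal: dict) -> str:
    # Would submit a transaction via the planned RPC service.
    return "swap A->B submitted" if signal["alert"] else "no action"


pipeline = compose_agents(nl_to_sql, run_query, alert_agent, swap_agent)
outcome = pipeline("What's the latest price?")
```

Each stage is trivially simple on its own, which is the building-block idea: complexity comes from the composition, not from any single agent.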

Nick (17:51):

People who have been around The Graph ecosystem for a while, and again, those that have listened to prior episodes with you and members of the Semiotic team, know that AI is already being leveraged and used at The Graph, and you outlined it in the beginning of the white paper, talking a little bit about Agentc and [inaudible 00:18:07]. These are two AI-driven tools that have already been deployed within The Graph or leveraging The Graph. I just want to ask kind of a background question, which is: did the development of those types of things, and the work that the team was doing there, sort of prime the pump for this white paper and these two new AI services that are going to come out?

Anirudh (Ani) Patel (18:30):

Yes. So I’ve talked a lot about Agentc already and how this was our motivation, our inspiration, for creating this white paper and for creating the AI services. I also want to point to the fact that really, at Semiotic, we have the expertise to not just build on Agentc but also, more broadly from our backgrounds, to actually launch a pretty successful service here. I talked a little bit about my background being in multi-agent reinforcement learning. I’ve been doing research in that field for the past six, seven years. Sam Green at Semiotic also has a background in reinforcement learning. We have a group within Semiotic currently that’s just constantly working on new AI problems within The Graph space, but we also have expertise in the data aspects. This is why we have the ability to focus on things like Subgraph SQL, ’cause we have engineers that have been writing SQL queries and working with SQL databases for 20-plus years. And so I think this combination for us is really why we see this as such a promising area for Semiotic to really contribute to The Graph going forwards.

Nick (19:34):

Ani, when you think about the implications of this white paper and the AI services coming to The Graph, how do you think these developments position, or reposition if you will, The Graph as a leader at the intersection of AI, blockchain, and web3? I mean, how would you think about that?

Anirudh (Ani) Patel (19:56):

I think I said at the very top that my key takeaway for people from the white paper is this repositioning of The Graph from a data marketplace to a compute marketplace, and I see the AI services that we’re proposing as the first step in doing this. We have people, we have Indexers, we have this community, we have the economics, we have the team with the expertise to actually accomplish this vision. To me, it’s really just about starting The Graph down this path, where we start to expand not just in the data services direction, but also in the compute services direction. Like I said, this is sort of, in my mind, the first step on that path.

Nick (20:36):

Ani, as you think down the line here, let’s imagine that the AI services are deployed and people are coming to The Graph and using it for AI. What does success look like to you in that environment? What’s your vision?

Anirudh (Ani) Patel (20:49):

Part of my vision is what I just described: that the AI services, it’s a bit tautological, but the success of the AI services should drive more compute services on The Graph, to really leverage the full scope of what all of our compute providers and our community can do. But more concretely, I would love to see, to start with, just dapps, many user-facing dapps that are using these AI services. And again, the goal in my mind is really to lower the barrier to entry as far as possible, to get people maybe without an AI background to deploy or create one of these models themselves, to get them using this technology and thinking about it.

(21:26):

But my eventual goal is also that I want web2 applications to really see The Graph as the cheapest place to do AI inference and to create these AI agents. And so to me, that’s sort of what success looks like. It’s this stepping-stone process of first web3 applications. We see widespread dapp adoption because we’ve lowered the barrier to entry, this brings down prices, which brings web2 apps into play, and then people truly see the potential of The Graph and we start to build out some more compute services. And I really think this is going to drive up query volume, both in terms of The Graph’s data and The Graph’s compute, ’cause again, AI needs both.

Nick (22:06):

I want to talk about next steps then, and I know it’s difficult, but what should we expect in terms of next steps in seeing these AI services on The Graph?

Anirudh (Ani) Patel (22:15):

I mentioned before that the AI services are going to be jobs-based. So Semiotic’s data services team is currently working on creating a jobs-based data service framework. It’s a bit of a mouthful. That’s what we’re calling it for now; we’ll workshop it. We’re going to use this to then create these AI services, and at least in my mind, currently the plan is pretty similar to what we’re following with Subgraph SQL. We’ll start with a private beta, just to identify bugs and to better understand how the community wants to actually use these services, so that we can tailor them a bit better to accomplish not what we think people want, but what people actually want.

(22:49):

I’ll create a Google form, actually, that we can link maybe in the description here for people who might want to participate in that private beta, and we’ll ask a few questions. You can participate either as a compute provider, if you’re interested in that or if you’re already an Indexer, or just as a tester, as a user. And of course, participating in the private beta gives you a voice in its development. So that’s maybe my call to action. That’s probably the best way to get feedback.

Nick (23:13):

And as you said, I’ll put a link in the show notes for anybody that wants to sign up, take a look at this, and get involved. You’ve mentioned a couple of times already this beta related to SQL. Before I let you go, can we shine a little bit of a light on that? What’s going on with Subgraph SQL, and how can people participate in that private beta?

Anirudh (Ani) Patel (23:31):

Yeah, so with Subgraph SQL, we’ve recently launched the private beta, which we’re extremely excited about. To be very clear, the private beta is meant to be an MVP. It’s there for us to identify bugs in our graph-node implementation, in case we missed any edge cases in the SQL aspect of it, and also to just understand how the community wants to use the Subgraph SQL data service, so that we can tailor it in the future to be more suited to what the community wants. With Subgraph SQL, I talked a little bit about the jobs-based data service framework earlier. SQL should also be jobs-based, the reason for this being that you have some queries that’ll take days to run and some queries that’ll take minutes to run, so it’s a lot better if Indexers aren’t just stuck running one query for you indefinitely, unable to serve any GraphQL queries or any simpler SQL queries in the meantime.

(24:22):

And so this is a piece that we’re going to need to implement. So what I’m trying to say is that the jobs-based data service framework is not only useful for the AI services, it’s also useful for the Subgraph SQL data service, and because it’s a framework, if any user wants to also launch a jobs-based data service on The Graph, we’ll make that possible. For people who actually want to participate in the private beta: we are keeping it to a pretty exclusive list, just because we have very specific questions that we want to ask. But if you are extremely interested in participating, you can reach out to me on Discord, and I guess we’ll put my handle in the show notes, and we can have a chat about whether that makes sense. You can participate either as an Indexer or as a tester, but again, we’re being pretty selective in who can participate here, on purpose.

Nick (25:11):

A lot of things happening at Semiotic. We’ve got the Subgraph SQL private beta, we’ve got this AI white paper, and soon AI services on The Graph. I think the last question I want to ask you, Ani, before I let you go is: as you look ahead, what’s your long-term vision for The Graph and its role, and it’s probably an evolving role, in this sort of web3 AI ecosystem that’s emerging alongside this entire emerging industry?

Anirudh (Ani) Patel (25:41):

I didn’t really talk about it here, I focused a lot on the main body of the white paper, but personally, as a reinforcement learning researcher, as an AI expert, as an engineer, the appendix is a lot more interesting. In the appendix, we sort of talk through how we believe crypto can benefit from having essentially a hierarchy of agents, where we can have multiple agents across every part of crypto that execute different actions, that understand and read different data, that talk to each other. Like I said, the details of that are in the white paper. I can’t really talk too much to that here without getting very technical, but this is how I envision AI evolving in web3, and it’ll be something very unique to web3 as compared with web2, specifically because the blockchain is all compute-based, right.

(26:32):

In the web2 world, oftentimes I have to physically go to a place, I have to physically talk to someone. There’s still a lot of face-to-face interaction. In web3, we’ve really managed to abstract a lot of these things away. We use smart contracts, and it really makes sort of a playground for reinforcement learning agents, or for agents of any kind, to interact with each other. And this was actually what first attracted me to the web3 space. In the appendix, we lay out this vision for how The Graph is sort of this key central piece that’s missing if we actually ever do want to enable this internet of agents. And so what’s my long-term vision? I don’t know how long-term this is, but that’s sort of my ideal. That’s my goal. That’s where I want to see us go, and then we’ll see. By that point, I’m sure I’ll have another, more ambitious vision to take over.

Nick (27:20):

As I said, I’ll put links in the show notes to everything we talked about today. I want to encourage any listeners that are interested in learning more about these AI services to visit the show notes. You’ve got to read the whole white paper, and you can do it, don’t worry if you’re non-technical; there’s plenty of content there that you’ll understand and grasp. Ani, congratulations to you and the Semiotic team, and thanks once again for joining the GRTiQ Podcast.


DISCLOSURE: GRTIQ is not affiliated, associated, authorized, endorsed by, or in any other way connected with The Graph, or any of its subsidiaries or affiliates.  This material has been prepared for information purposes only, and it is not intended to provide, and should not be relied upon for, tax, legal, financial, or investment advice. The content for this material is developed from sources believed to be providing accurate information. The Graph token holders should do their own research regarding individual Indexers and the risks, including objectives, charges, and expenses, associated with the purchase of GRT or the delegation of GRT.

©GRTIQ.com