GRTiQ Podcast: 200 Ram Kumar

Today I am speaking with Ram Kumar, Co-founder at OpenLedger, a platform building specialized AI models through decentralized data collection and attribution.

Born and raised in Chennai, India’s thriving tech hub, Ram has been actively involved in the web3 space since 2017. Ram’s journey from building web3 solutions for enterprises to founding OpenLedger was driven by his observation of data bottlenecks in AI development. His vision for specialized AI models that can power purposeful agents while ensuring fair attribution to data contributors showcases his commitment to making AI development more accessible and transparent.

The GRTiQ Podcast owns the copyright in and to all content, including transcripts and images, of the GRTiQ Podcast, with all rights reserved, as well as our right of publicity. You are free to share and/or reference the information contained herein, including show transcripts (500-word maximum) in any media articles, personal websites, in other non-commercial articles or blog posts, or on a non-commercial personal social media account, so long as you include proper attribution (i.e., “The GRTiQ Podcast”) and link back to the appropriate URL (i.e., GRTiQ.com/podcast[episode]). We do not authorize anyone to copy any portion of the podcast content or to use the GRTiQ or GRTiQ Podcast name, image, or likeness, for any commercial purpose or use, including without limitation inclusion in any books, e-books or audiobooks, book summaries or synopses, or on any commercial websites or social media sites that either offer or promote your products or services, or anyone else’s products or services. The content of GRTiQ Podcasts is for informational purposes only and does not constitute tax, legal, or investment advice.

SHOW NOTES:

SHOW TRANSCRIPTS

We use software and some light editing to transcribe podcast episodes. Any errors, typos, or other mistakes in the show transcripts are the responsibility of GRTiQ Podcast and not our guest(s). We review and update show notes regularly, and we appreciate suggested edits – email: iQ at GRTiQ dot COM.

The following podcast is for informational purposes only. The contents of this podcast do not constitute tax, legal, or investment advice. Take responsibility for your own decisions, consult with the proper professionals, and do your own research.

Ram Kumar (02:06):

I think the future of AI is going to be big. As I told you, it’s like how electricity changed the world; it was a general-purpose innovation. Very similarly, that’s how AI is going to be.

Nick (02:44):

Today I’m speaking with Ram Kumar, co-founder at OpenLedger, a platform building specialized AI models through decentralized data collection and attribution. Born and raised in Chennai, India’s thriving tech hub, Ram has been actively involved in the web3 space since 2017. Ram’s journey from building web3 solutions for enterprises to founding OpenLedger was driven by his observation of data bottlenecks in AI development. His vision for specialized AI models that can power purposeful agents while ensuring fair attribution to data contributors showcases his commitment to making AI development more accessible and transparent. I started the conversation with Ram by talking about his background in Chennai and how the accessibility of web3 has enabled talented builders from India to make significant contributions to the industry.

Ram Kumar (03:36):

I’m originally from India, but I travel to almost every other country. That’s how it is in web3 nowadays, right? I spend a lot of time between SF and India. Chennai is the place where I was born and raised. These are my current two cities. Interestingly, Chennai is home to a lot of tech entrepreneurs. Sundar Pichai, the CEO of Google, is from Chennai. Aravind Srinivas, Perplexity’s CEO, is from Chennai. Sreeram Kannan, who founded EigenLayer, is from Chennai. It’s a good place to be coming from. Yes, this is my city.

Nick (04:10):

As someone from India, has it been either surprising to you or did you suspect this would be the case that so many founders and builders in web3 are emerging from India?

Ram Kumar (04:22):

India has always had a lot of talent. I guess exposure and liquidity have always been an issue in terms of funding. Crypto actually breaks that. It’s a place where anyone from anywhere could come in, and if they have talent, if they can get the right opportunities, they’ll be able to shine. That’s what we see happening in crypto nowadays. You don’t need to know someone who knows someone to raise money or get funding. It’s not a big surprise that there are a lot of Indians in crypto right now, and I’m glad that it’s happening. I’m glad that we’re able to make a change in this industry.

Nick (04:58):

Take us back in time. When did you first become interested in technology, and do you remember either that moment or that experience that lit that fire for you?

Ram Kumar (05:09):

I belong to the Millennial era. We’ve always seen tech evolve as we grew up. I remember the first time we had a computer in our home was when I was in sixth grade, and we had this dial-up internet. I’m sure that everyone my age can relate to that. I had to wait for that to get connected so that I could actually have a conversation with my aunt, who was living out of India, in Canada. I talked to her almost every day during that time, because I used to learn from her what was happening in other countries, because the technology was much further ahead. That was my first foray into understanding tech. I used tech to understand tech, and it is quite interesting. Other than that, I think games were always a big deal for me. Playing games on PCs was the biggest moment of every day for me. Yeah. I think these two are the biggest connections for me to get into tech.

(06:09):

I’ve always been fascinated with mobile phones and how they revolutionized the app era. Before crypto, before 2017, before I got into the web3 space, I was more into apps. I felt it was one of the biggest revolutions that happened post-internet. I’ve always been fascinated by how phones have been this exocortex of our lives. An exocortex is a rented brain, where you use another device to optimize how you perform your everyday tasks or needs. Mobile phones have been a true exocortex for us. Today, if we take the mobile phone out of your hand, you would not be able to function. It’s sad in a way, but it has really helped us in a lot of ways, guiding people and letting us know so much about life. In fact, it’s been life-saving in so many situations for many people. I’ve always been fascinated with tech and what it can do to improve people’s lives.

Nick (07:11):

You mentioned that you got active and interested in the crypto blockchain space in 2017. That’s quite a while ago. Talk to us about that transition and how you got interested in crypto.

Ram Kumar (07:24):

Yeah. I’m a bit old. 2017 is interesting. I’ve been telling this to a lot of people I’ve spoken with: I got paid in Bitcoin. Myself and my co-founder, we got paid in Bitcoin and saw the tech behind that and saw how amazing it was that you could pay someone across the globe in a few seconds. And then we saw there was a need for more devs to participate in this ecosystem because there were not many people who were building, and that led us to build an R&D company. It started very small, maybe 10 employees initially, along with us. We did some interesting work, worked with a lot of protocols, and grew to about 200-plus employees. I had a good experience with that, and we still contribute to that on the side. We’ve worked with Polygon, Hedera Hashgraph, more recent chains like Aptos, and many more blockchains out there.

(08:21):

We also got an opportunity to work with traditional companies who wanted to migrate to web3, so we had a chance to work with Walmart India’s subsidiary Flipkart, Cadbury, Pepsi, the Los Angeles Times, and many more traditional businesses. We helped them launch web3 initiatives. It was a very interesting learning curve for us, and it taught us a lot about the various needs of the industry, how the industry is a forerunner in a lot of ways and also behind on many things compared to traditional industry. We thought data was a big bottleneck. Data was never on-chain, and we could not do much with data on-chain, so that’s what triggered us to do something around data and AI, and that’s what led to OpenLedger.

Nick (09:11):

Going back just a little bit to the experience you had in 2017, you mentioned that you were working at the time and you were being paid in Bitcoin. What was it beyond just the fact that you were able to get paid globally, I guess exchange funds globally? What was it about the tech that caught your eye? What did you see at that time?

Ram Kumar (09:31):

It just let anyone across the globe permissionlessly get involved and start building products. Back in 2017, the only thing Ethereum had was that you could launch tokens on top of it. That was a bit crazy. We saw a lot of ICOs come in. Even though there were a lot of scams, there were quite a few interesting projects which used a token as a utility to encourage people to participate. We’ve seen projects where people used tokens to get people to contribute data, or used tokens to make them do an activity and stuff like that. That was quite interesting, and we’ve all seen DeFi summer, okay? I think DeFi was a great way of bringing finance to this industry. That was quite good. While we were watching this boom nicely and all that, the only thing that was worrying was that there was not much utility, as I told you.

(10:27):

So when DeFi happened, that was a perfect use case, because Bitcoin was for payments, and there was not much happening beyond that. People were mining Bitcoin and paying each other, but those payments weren’t micropayments. What was interesting was that we could see DeFi evolve on one side; it wasn’t just payments, it was people figuring out ways to bring the banking system into web3. We’ve seen all the innovations that happened during that phase as well. It’s a very interesting concept that boomed big and has still been one of the biggest utilities in crypto. So yes, I would say that’s one innovation we saw happening in front of us, and it was fascinating.

Nick (11:12):

I want to ask you this follow-up question about crypto and DeFi. There’s been this argument that when it comes to product-market fit, the crypto industry has really only found it with maybe DeFi. I know there are arguments that go beyond that, including gaming or AI and things like that. But where do you come in on that? Do you see crypto enabling a DeFi world and then everything else is a little bit of an experiment and wait-and-see, or something different?

Ram Kumar (11:42):

I wouldn’t say exactly that, but I think we should see this from a different angle, right? For good or bad, crypto, or web3 projects, have always been associated with market volatility. Retail has always associated crypto with making money. They’ve always been fascinated by and loved the concept that if you get involved in a project, you’ll probably be able to make some additional revenue. They like the fact that you could trade and do some DeFi activities to earn additionally. I think that’s what has really made people see crypto as derivative forms of DeFi: gaming became GameFi because people wanted to bring in elements of DeFi, because DeFi was so huge that retail wanted that, so they brought in elements of trading, like trading the assets and stuff. We’ve also seen that evolve a bit into NFTs. NFTs are majorly about trading. And then we’ve also seen that evolving slightly into AI. That’s the road that has been traveled.

(12:49):

In a way, it’s good because you see liquidity happening. You could fight against larger traditional projects out there because you have liquidity here. It gives you the power to push forward and do some innovations. On one side, if your project is not DeFi worthy, if you have a bunch of scientists working on something which is innovative and they don’t want to spend time on bringing in DeFi element, because it’s totally unnecessary, then the project is not really spoken about. I’ve heard and I’ve seen many projects which are really innovative, and they’re passionate about what they’re building, but they don’t have a crypto angle or a token angle to it, and no one really cares about those projects.

(13:26):

So that’s the negative side of the spectrum, but I guess, we all have to balance and we have to all figure out ways where we can do innovations and also have certain elements which are fun for the retail, for the larger masses to participate.

Nick (14:43):

Let’s go back then to those early conversations that you talked about just moments ago that led to launching OpenLedger. You were talking about some insights you got related to data and the nature of data being a little bit of a bottleneck for conventional or traditional, I guess, companies making the migration into web3. Can you go back there and just give us a little more information about the origins of OpenLedger and what the vision was that you were exploring?

Ram Kumar (15:11):

When we initially started, right, we were exploring how we could bring data on-chain. We were exploring how we could have a database on-chain and stuff like that, but we found out that not many people really cared. They would partner with you, they would do some partnerships just to bring the brand name out, but it would not really be long-term usage. They would just try to experiment with it. The major reason why is that you already have users and transactions on-chain. You don’t want to have more data on-chain because it’s just going to cost you more money. So we realized that as we were building a product, and we moved towards data and AI, because data is one of the biggest bottlenecks in AI. We saw that there was a natural lead. We had a lot of interest coming in from AI projects to work with this.

(16:05):

That’s where we also figured out that there would be a need for specialized models in the industry. AI was moving; everyone was moving away from just using AI for information-seeking to much more serious, productized use cases. As that was happening, we also figured out that if AI is going to be a general-purpose technology, and it’s going to be used in multiple places, in your office, in your work, and when you buy groceries and stuff like that, then you need to have a multimodal system where you don’t use just one LLM, one large language model, to power all these current products, the way OpenAI’s models power ChatGPT, or Llama powers Meta’s AI apps and other AI apps out there. Most of them use either Llama or OpenAI’s model, or open-source models like Mistral and all that. These are large language models which are good enough to give you the context.

(17:06):

It just gives you a general knowledge of the context. It’s very widespread, but it’s not specialized to a particular domain. You could do RAG, but it’s not very efficient, and that is what led us to think about how we could build specialized models which are fine-tuned for a particular purpose, for a particular need. They can work along with an LLM; these specialized models don’t only perform on their own, they work with an LLM to get the context of the query that is asked. It could be a query or an input that comes in, and then they use the specialized models to actually execute the job and give the output. We saw this happening in apps like Perplexity and all that, where they use a large language model, like an OpenAI model, to understand the context of the internet-scraped data, and then go ahead and use another specialized model, let’s say a Phi model, to go ahead and answer the math question.
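The routing pattern Ram describes, a general-purpose LLM providing context while a specialized model executes the task, can be sketched roughly as follows. Both "models" here are keyword-based stand-ins so the flow is runnable; this is not OpenLedger's or Perplexity's actual implementation:

```python
# Sketch of the "general LLM + specialized model" routing pattern.
# Both models are keyword-based stand-ins so the flow is runnable.

SPECIALISTS = {
    "math": lambda q: f"[math model] solving: {q}",
    "solidity": lambda q: f"[solidity model] reviewing: {q}",
    "general": lambda q: f"[base LLM] answering: {q}",
}

def classify_domain(query: str) -> str:
    """Stands in for the LLM that infers the context/domain of the query."""
    lowered = query.lower()
    if any(word in lowered for word in ("solve", "integral", "equation")):
        return "math"
    if "solidity" in lowered or "contract" in lowered:
        return "solidity"
    return "general"

def answer(query: str) -> str:
    # Step 1: the general model routes; step 2: the specialist executes the job.
    return SPECIALISTS[classify_domain(query)](query)

print(answer("solve this equation: x^2 = 9"))
```

In a real system the router would be an LLM call and the specialists would be fine-tuned models, but the division of labor is the same.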

(18:01):

So we saw this happening, we saw this trend, and we knew that this is what is going to be the future, and we started building out this product where we could build specialized models on top of this. You start with collecting specialized data, basically proprietary data of a particular domain. It could come from a business, or it could come from a subject matter expert. Once the data is collected, you can then fine-tune a model with that, and then you could make that available in our platform for any apps or agents to infer from. Additionally, we could also provide that data as vectorized embeddings, [inaudible 00:18:37] RAG, so that you can use any base model and then attach specialized RAGs to that. So that’s what we built in OpenLedger.
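The second path described here, attaching specialized data to a base model as a RAG, can be illustrated with toy bag-of-words embeddings standing in for a real embedding model. The corpus entries are hypothetical examples of domain data, not anything from OpenLedger:

```python
# Toy sketch of "base model + specialized RAG": embed a domain dataset,
# retrieve the closest entry, and hand it to the base model as context.
# Bag-of-words counts stand in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical domain-specific data contributed to a data net, pre-embedded.
corpus = [
    "solidity reentrancy bugs let attackers drain contract funds",
    "dial-up modems negotiate speed during the handshake",
]
index = [(doc, embed(doc)) for doc in corpus]

def retrieve(query: str) -> str:
    qv = embed(query)
    return max(index, key=lambda pair: cosine(qv, pair[1]))[0]

def rag_prompt(query: str) -> str:
    # A base LLM would generate from this; we just assemble the prompt.
    return f"context: {retrieve(query)}\nquestion: {query}"

print(rag_prompt("how do reentrancy bugs work"))
```

Swapping the toy `embed` for a real embedding model and the `corpus` for a data net's contributions gives the "vectorized embeddings + base model" flow Ram outlines.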

Nick (18:47):

As you’ve built OpenLedger, you’ve obviously been driving and focusing in on this AI/blockchain narrative that I would argue caught the attention of the blockchain industry within the last, call it, year or so. But as you’ve been building in that space and seeing, I guess, competition arise and the narratives throughout the industry arise, what are some of the observations you’ve made? Is there huge demand for this? Is the narrative becoming noise in the industry? How do you make sense of all of it?

Ram Kumar (19:19):

As I told you before, from ICOs to the DeFi era to the NFT boom, we’ve always seen noise coming in. When a new innovation happens in the industry, there is a lot of noise that gets generated. And then after a certain time, all the fluff is gone and you have the projects that are the real deal. It’s very natural. It’s not like the crypto industry is the only place where this happens. You’ve always seen this in traditional industry as well. We’ve seen this with the dot-com bubble. We’ve seen this in the current AI scene as well. There are so many it’d be hard to count; there are tens of thousands of AI agent companies just in Silicon Valley doing very similar work.

(20:05):

It is very natural for humans to notice that something is booming and try to experiment with it; some of them might click, some of them might die down. I think that’s what is happening with the web3 industry as well, and I think that’s absolutely fine. If a project is solid and your intentions are good, you will be able to stand out in the community, and you will be able to last longer than the competition. It’s good to have competition. You learn from each other. You know what is working, you know what’s not working, and it just creates better and higher awareness for the entire ecosystem.

Nick (20:39):

I’ve had other guests on the podcast that are exploring this AI and decentralization, and there’s a whole bunch of really brilliant people like yourself working on it. When you think about OpenLedger within a competitive set of people that are exploring these things, how would you describe what makes OpenLedger unique?

Ram Kumar (20:58):

There are aspects we are working on which not many people have really been concentrating on. Attribution: a lot of people talk about it, but the way we do attribution is a bit different. Attribution is very important, and it’s the centerpiece of our product, because we build specialized models. The biggest moving piece here is the data, and the data contributors have to benefit when these models make revenue, when these models are being used. For us, the first thing we will implement when we go live is to make sure that attribution works, and that all three aspects work first: we can collect data, we can build models, and we can let apps use these models. All of this has to work together for attribution to really kick in and make sure that rewards are distributed back to the data providers. So I guess that’s how we are a bit different from other projects.

(21:52):

The AI spectrum is so big. There are data labeling companies; some people mistake us for competition to data labeling companies, which we are not. We don’t do data labeling. We collect data. We could work with data labeling companies. There are other companies like Grass, which does data collection, which is one part of what we do. We can collect data ourselves, and we could collect data from Grass as well. So I wouldn’t say there are direct competitors to us, but there are people who do certain pieces of what we do. We’re trying to build a larger ecosystem where we would have to collaborate with all these people, help them build on top of this, and also build together with them so the ecosystem grows.

Nick (22:33):

One of the things that, when you’re talking about AI, everybody’s interested in is AI agents and I think we’ve seen a few of these out in the wild, and people are very interested in them. When you pair this story or vision of what AI agents will do with what you are working on and this specialized models, how does that story evolve there? What does AI agents and specialized models from OpenLedger do in the world?

Ram Kumar (23:05):

If you take a look at the current agent market, it’s more entertainment, where people use agents to tweet and give shout-outs, or maybe like an agent twin which you can interact with; these are very initial levels of what it can potentially do. But if you want to go much deeper and make agents, say, trade for you... There are some trading agents today, but they’re not very efficient. If you want an agent which is really efficient and which has the knowledge necessary to execute a trade and probably return millions of dollars for you, you would want something which is an expert in that particular field, right? For that, if you want to build a model, you need to have very specialized, domain-specific data. That is what you can do on top of this. You can collect domain-specific data and build models which are domain-specific, for example for trading, which could have much more precise output compared to existing models out there.

(24:09):

So you can build much more purposeful agents on top of this. You could build an agent for governance on top of this, which understands web3 governance in much more detail. You could build agents which give you web3 alpha. You can just train them with all the history of web3 alpha that’s happened till today, and then you can also give them a RAG which has the current news, and you can have web3 alpha data and a web3 alpha model on top of it. Then you can build an agent which goes ahead and shouts out all the alpha that is happening to you on your Telegram or your Twitter or wherever. So purposeful agents which use these useful, unique models can be built on top of this, and it could be an app, it could be a service, that can also use us. So I guess that’s how we could add value to the current agent scene today.

Nick (25:06):

As is common with most web3 projects and ecosystems, there are different stakeholders that can participate in an ecosystem and contribute value. That’s the case at OpenLedger, as far as I can tell, and you’ll have to correct me if I’m wrong: there are three primary stakeholder groups, data nets, models, and agents. Can you describe what those are and maybe correct me where I’m wrong?

Ram Kumar (25:28):

Yeah, I think we’ve gone over a bit of that as well, but maybe I can give you some idea. So as I told you, we build specialized models on top of this, and in order to have a specialized model, the first step is to actually collect specialized data. That is where our data nets come in. You can use our data nets; think of them as contracts. You could create a contract for, let’s say, Solidity data collection and start to collect Solidity data. You can write a rule engine for what kind of data you want to collect and how the data has to be, and you can use our validator set. We work with EigenLayer for validation, so we can use EigenLayer AVS operators to validate the data being collected on your data net.

(26:11):

As the data is contributed, it is validated and then stored on-chain. The hash of the data is stored on-chain, and the data itself can live in the location that you choose. And then the wallet of each contributor, which could be a subject matter expert or a business, the wallet address tagged to that particular data, is put on-chain as well. This is the first phase. This is where the data net is used like a repository to collect data. And then model training happens. You could go ahead and take this dataset once it’s collected. Everything is published in this platform, so people can find the data repositories that are available and use them to train models on top of this, where they start fine-tuning models using LoRA fine-tuning. They could use any Llama model or other open-source model that is posted in our platform to go ahead and train with this data. Or they could take the model as it is and provide this data as a RAG to these models, and then make them available.
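The on-chain record described above, a hash of the contributed data plus the contributor's wallet, can be sketched as follows. The field names and sample wallet/storage URIs are illustrative, not OpenLedger's actual schema:

```python
# Sketch of a data-net contribution record: the raw data stays off-chain,
# while its hash and the contributor's wallet go on-chain. Field names and
# the sample wallet/URI are illustrative, not OpenLedger's actual schema.
import hashlib

def make_record(data: bytes, wallet: str, storage_uri: str) -> dict:
    return {
        "data_hash": hashlib.sha256(data).hexdigest(),  # on-chain commitment
        "contributor": wallet,        # wallet tagged to this contribution
        "location": storage_uri,      # the data lives wherever you choose
    }

def verify(data: bytes, record: dict) -> bool:
    """Anyone can re-hash the off-chain data and detect tampering."""
    return hashlib.sha256(data).hexdigest() == record["data_hash"]

record = make_record(b"pragma solidity ^0.8.0; ...",
                     wallet="0x1234...abcd", storage_uri="ipfs://...")
print(verify(b"pragma solidity ^0.8.0; ...", record))   # True
print(verify(b"tampered data", record))                  # False
```

Because only the hash and the wallet go on-chain, storage stays cheap while the record still lets anyone check that the off-chain data was not altered after contribution.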

(27:10):

So you can either fine-tune these models with the data that’s been collected, or you could provide a RAG to these models and then host it on the platform. Now you’ve hosted a fine-tuned, specialized model in our platform, which can then be accessed by apps and agents. This could be an existing agent that is already operating and needs better models to make better outputs; it can use them. Or you can use our interface, where we have an agentic framework available. We’ll be working with other open-source agentic frameworks, and you could use any of these frameworks to go ahead and launch an agent and connect it with any of the models hosted in our platform to execute your agentic task. Or it could be an app which just needs an AI service call; you could go ahead and use our APIs for that.

(27:59):

Yeah. That’s the third, the agentic or app layer. So these are the three layers built on OpenLedger, and what connects all of these layers is attribution, as I told you. Once you’re able to attribute a data provider to the actual output of a model, you ensure that the data provider knows where his data is traveling, which model it’s being used for, and how he’s going to benefit based on the model’s usage. He’ll be able to track all of that; we’ll have that on-chain. The model creator, once he knows how a piece of data is influencing his own model, would know that the model performs in a certain way when particular data is collected, so they can optimize what data they want to collect and how the data is used as well.

(28:47):

So they can figure out what unuseful data is being collected. They could refine the model much more. They could prune the model much more as they start using the model and seeing how the data is being used as well. And then the third is the app or agent side. Because of attribution, they know where the data is originating; they know the source. They know that the data is probably not biased, and they can verify that. So they have better trust in using our platform and using our models. Yes, so that’s how these three individual layers in OpenLedger work and how attribution connects all three layers.
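The reward flow Ram outlines, splitting a model's revenue among data providers according to how much their data influenced it, reduces to a proportional payout. The influence scores below are made-up inputs; computing real attribution scores from model behavior is the hard part, and the wallet names are hypothetical:

```python
# Toy version of the attribution payout: a model's revenue is split among
# data contributors in proportion to influence scores. The scores here are
# made-up inputs; deriving real influence from model outputs is the hard part.
def distribute(revenue: float, influence: dict[str, float]) -> dict[str, float]:
    total = sum(influence.values())
    return {wallet: revenue * score / total for wallet, score in influence.items()}

payouts = distribute(100.0, {"0xalice": 3.0, "0xbob": 1.0})
print(payouts)  # {'0xalice': 75.0, '0xbob': 25.0}
```

On-chain, each payout would be an auditable transfer to the tagged contributor wallet, which is what lets providers verify they are being rewarded when their data is used.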

Nick (29:28):

I want to ask you this question about the importance of decentralization. So when it comes to the whole blockchain crypto ethos, clearly a lot of people have strong opinions about the importance of decentralization, but if we narrow it down to this discussion around AI and why decentralization is important for AI, how would you describe that?

Ram Kumar (29:48):

So I believe you don’t have to decentralize everything; you have to decentralize wherever it is necessary. A quick example: bootstrapping is one of the biggest places where you can use decentralization, where you bring in a large group of people from various parts of the world, have them contribute resources, and use decentralized primitives like on-chain rewards and on-chain verification to make sure they’re rewarded fairly. That really helps the industry. We’ve seen that happen with decentralized GPUs, right? People across the globe contributed GPUs, and they were rewarded for that. Developers got GPUs at much, much lower cost compared to what you would get from a centralized GPU provider. That’s what we are trying to do with data. We’re going to provide a platform where anyone across the globe can contribute data, models can be built on top of that, and they get paid as these models are being used.

(30:53):

So that’s bootstrapping, model creation, and getting data at much lower cost compared to the current existing platforms. The other one is the verification aspect of decentralization. You can verify and see the proofs on-chain, and it’s immutable; you can believe in this trustless system. This basically enables anyone across the globe, like businesses which have proprietary data, to contribute that data. They can confidently know that the data is not being manipulated. They would be able to figure out where this data travels and which model it’s being trained on. They would also be able to see on-chain whether the model was used, whether their data was used in the output, and whether they’re getting paid for that. They’d be able to see that on-chain.

(31:42):

So a business knows that if their data is being consumed, they’re attributed back rightly, and they’re getting fairly rewarded for that. We use blockchain again over there to show that verification on-chain, which builds this trustless system. Yeah. I would say these two are the biggest reasons why we use blockchain, and why I think blockchain should be used for a technology like AI. It just makes the entire system much more transparent and much more accessible.

Nick (32:14):

When we talk about the original vision of OpenLedger and then where you are today, I know there’s been some news and announcements related to the Testnet, but what can you tell us about what’s going on in the OpenLedger community and how people can get involved?

Ram Kumar (32:27):

Yeah. Our Testnet is going live in a few days. We are running an internal test run with our community, and then we’ll open up. The idea is to go in a phased manner. We would love to get constant feedback from our community as we build this product out and as we have it for testing. The first layer of that is the data [inaudible 00:32:51], and we are building something called a data intelligence layer, which is built by an ex-Google DeepMind guy. He’s building a product which is, think of it as a much deeper, more vertical version of Common Crawl. He’s using OpenLedger’s infrastructure to collect internet data. It’s called the DIL layer, which is a common data pool that will be part of a data net, where anyone can use it while they’re fine-tuning models to fill in data wherever necessary. That’s the first phase, and then we will roll out other phases where anyone can start launching their own data nets on top of this.

(33:26):

And then that will evolve into letting people launch models, claim those models, and then fine-tune them with the data that is collected, experimenting with RAG, converting this data into a RAG setup and providing that alongside models as well. All of that will happen as we progress through the Testnet.

Nick (33:43):

How can people learn more and sign up for more information and stay in touch with everything?

Ram Kumar (33:47):

Yeah. We give out rewards as points on our platform, and we want more people to participate and be rewarded for that. More news about that is available on our Twitter handle, @OpenledgerHQ, and you can also visit our website, openledger.xyz. You'll find a lot of information there, and we've been constantly communicating with our community as well. You can also join our Discord channel, linked on our website, which has much quicker updates. You can ask questions, and our team will be able to reach out to you.

Nick (34:23):

I want to ask you a couple of questions about the nature of entrepreneurship and your experience. You've worked at and started several firms, including this project now, OpenLedger. What have you learned about the nature of entrepreneurship based on your experiences, and what it takes to succeed?

Ram Kumar (34:39):

I think perseverance is very important as an entrepreneur. From 2017, we started building projects and worked with numerous projects; at OpenLedger, we are one of the core contributors. What we've learned is that perseverance is the key to all of this, especially in an industry like crypto, where it's so volatile. If the market goes down, you don't know when it's really going to come back again. Unfortunately, we are all dependent on the market, at least for the next few years; I hope that changes someday. In that case, you need to have perseverance. You need to have a positive mindset. It's a very difficult job. You need to have a lot of passion for the work that you do, and passion is what will keep you going every day, because not every day will be a happy, lucky day. So I guess these three are the most important factors for any entrepreneur to have.

Nick (35:34):

What about how you think about startups? If an entrepreneur needs to have those qualities, what do you think is important when it comes to launching a project? What types of activities or factors determine the success of a good idea?

Ram Kumar (35:51):

I guess you need to see past all the noise that is out there. Again, especially before [inaudible 00:36:00] crypto, you start seeing a lot of noise, and you might be tempted to shift your attention to whatever is trending. A company should have a very clear focus and a belief that what they're building will work. That doesn't mean you keep on building something useless that no one is going to like. You need to make sure your product is useful. You need to make sure people would like what you're building and would use it, but never be distracted by all the noise out there. Stay focused and committed to something you think is useful for people. I think that always helps startups. Also, getting constant feedback from your community and from people in the industry is always really helpful.

(36:46):

I think these two are key factors for the betterment of a product. If a product is good, everything will fall into place. Everyone will talk about you. People will start using your product. You don't have to worry about funding and all of that. So as long as you can concentrate and put your mind into that, I think the other things will fall into place.

Nick (37:06):

As part of the funding round for OpenLedger, you got to work with other founders, and you mentioned them at the beginning of the interview, some of the projects like EigenLayer and Polygon. I'm curious, because of some of the work you've done there and some of the people you've met, what have you learned from them about leadership or management?

Ram Kumar (37:26):

We've been in the industry since 2017, so I've met a lot of founders. They've always taught us many things about the industry and helped us see beyond the hype that is out there. Especially someone like Sreeram Kannan, and Sandeep, and all these folks. They've been really helpful in guiding us whenever we had questions, or really showing us what [inaudible 00:37:54] and decentralization are all about. We've always seen their passion for the industry, and that really motivated us whenever we had a very bad downturn. There have been some very key players during this journey for us since 2017 who really helped structure our organization and helped us work with very interesting people.

(38:18):

Entrepreneurs help each other. They know what really goes wrong when a company is being built, and entrepreneurs are usually very empathetic in that way. That has really helped us in shaping our company. Some conversations that we had with people have really changed the outcome of the company. Those few minutes really guided us to doing what we're doing now. Yeah. People are what make industries rise and fall. So fortunately, we have met and spoken with a lot of these entrepreneurs, who have really helped us a lot.

Nick (38:56):

In another interview, or I think it might've been a presentation, you talked about how it's likely we'll run out of real-world data to train AI. Listeners might know the terminology of synthetic data, but can you talk about that? That's an interesting topic.

Ram Kumar (39:12):

Synthetic data is one form of data that we think will be a game changer, and we encourage a lot of people to use our network to build synthetic data. For example, in the data intelligence layer that we are building, once the data is collected from the internet, we use synthetic data to augment it, because you need synthetic data wherever necessary to really augment and build large amounts of useful data. Synthetic data has also been really good for privacy preservation. For example, with financial data, where you have a lot of private information, synthetic data can be used to mask that. And there are scenarios where you don't really have real-world data. For example, with vision models built for training autonomous vehicles, you can't possibly recreate every possible scenario of how a vehicle might get into traffic, get into a particular accident, or encounter a particular terrain.

(40:15):

You would not be able to emulate everything that is possible. Synthetic data has a real use case there, and you've seen this happening in most of the vision models trained today. We also see this happening in other sectors where you don't get data very easily, where it's not very accessible; synthetic data really fills in there as well. So there are two ways synthetic data can be used: one is for augmenting and creating more data, and the second is for privacy.

Nick (40:48):

What excites you most about the future of AI?

Ram Kumar (40:51):

I think the future of AI is going to be big. As I told you, it's like how electricity changed the world: it's a general-purpose innovation, and AI is going to be very similar. I think it's a given; everybody knows this. Where it could really make a difference is the medical sector. I would love to see a lot of innovation there. I think that's where humankind is going backwards. Our health is not really great, and we don't really care about it because we're chasing various other things. I think that is changing; a lot of people are becoming aware, so that's good. And as we become aware, if we could have some innovations in that particular space, it would be amazing. Having AI is like having a huge army of people working along with you, letting you research things you previously had few resources for.

(41:41):

It just speeds up all the research and innovation we've been working on. I think that's great overall for humanity and society. That's the simple way to put it, right? It's just going to skyrocket all the work we're doing, the way laying roads and transport really connected people and let businesses do commerce much more quickly. Modern computers extended that, and the internet made it so accessible that I can talk to anyone across the globe. Very similarly, AI is going to skyrocket all of this; it's going to 100X all of this. Yeah, we are all waiting to see that, and we see the updates and innovations happening every day. Let's see how it goes. We are making a very, very small dent in that by creating a platform that enables businesses and subject matter experts to come forward and contribute their data for AI. So hopefully, we also make some change. Yeah.

Nick (42:51):

Well, Ram, now we've reached the point in the podcast where I'm going to ask you the GRTiQ 10. These are 10 questions I ask each guest of the podcast every week. It gives us a chance to learn more about you personally, and I always hope listeners will learn something new, try something different, or achieve more in their own lives. So Ram, are you ready for the GRTiQ 10?

Ram Kumar (43:08):

Sure, absolutely.

Nick (43:20):

What book or article has had the most impact on your life?

Ram Kumar (43:26):

I've read a lot of books. Okay, I didn't expect this question. It's very hard to say which book really changed your life. I'm more of a movie guy, but for books, I think Atomic Habits is very interesting. I love that book, and I've read it multiple times. Yeah, I would say Atomic Habits.

Nick (43:51):

Great. Let’s follow up on what you said there. Is there a movie or a TV show that you would recommend everybody’s got to watch?

Ram Kumar (43:58):

There's this all-time favorite movie of mine called Into the Wild, directed by Sean Penn. It's about a guy who goes into the wild, as the title says. He earns his college degree and all that, and then he just learns about life and realizes what life is all about. That movie changed the way I see life and changed a lot of my perspective on life. Yeah, I would recommend anyone to watch that movie.

Nick (44:33):

If you could only listen to one music album for the rest of your life, which one would you choose?

Ram Kumar (44:39):

Okay. I wouldn't want to listen to just one music album all my life, because music is something I love, and I want to listen to as many new albums and as much new music as possible. And with AI, I don't think you should commit to that, because AI is going to bring so much innovation to the music industry and create music we've never heard before in our lives. Yeah, so I wouldn't want to listen to just one music album.

Nick (45:07):

What’s the best advice someone’s ever given to you?

Ram Kumar (45:11):

So I got this advice from my grandmother, actually. She said, "Even if you pour gallons and gallons of water on a flower," on a plant, basically, "it's not going to bloom until its right time." I guess that is something I always think about: whatever effort you put in, things will turn out right at the right time.

Nick (45:37):

What’s one thing you’ve learned in your life that you don’t think most other people have learned or know yet?

Ram Kumar (45:44):

I think most people have very similar experiences. In fact, if you take a look at it, even though we are all different, we have very similar experiences, so I'm sure other people have had experiences similar to mine. But one thing I have learned, which I think most other people should also learn, is, as they always say, perseverance: really have the strength to withstand all the hardships in life. One day things will change. I think that is something everyone needs to learn. And also, be empathetic; really think about the people you talk to. Not everyone is having as good a day as you might be. Just be empathetic and caring.

Nick (46:24):

What’s the best life hack you’ve discovered for yourself?

Ram Kumar (46:28):

There's this thing I learned recently where you can go to any chatbot, like a Llama model or whatever model you use, and tell it what you want to become. I saw this on Twitter and just tried it out. The prompt is basically to tell it what you want to become in the next 30 years, describing your goals and ambitions, and then you ask it to describe a day in your life. It gives you this day in your life that is very fascinating and very motivating to look at. It's a good hack to make you work much harder and with much more insight, knowing that is how your life could be going forward.

Nick (47:20):

Then Ram, based on your own life experience and observations, what’s the one habit or characteristic that you think best explains how people find success in life?

Ram Kumar (47:30):

Being truthful to yourself. As long as you're being truthful to yourself and you follow that, I think it leads to success. I've seen that in a lot of scenarios, and I've heard it from a lot of people as well.

Nick (47:45):

And then Ram, the final three questions are complete the sentence type questions. So the first one is, the thing that most excites me about the future of web3 is-

Ram Kumar (47:55):

Decentralized AI.

Nick (47:57):

And how about this one? If you’re on X or Twitter, whatever people call it, then you should be following-

Ram Kumar (48:02):

Andrej Karpathy. I think he posts excellent stuff on Twitter about AI.

Nick (48:10):

And then the final question, Ram, I’m happiest when-

Ram Kumar (48:14):

I’m happiest when I see my family.

Nick (48:24):

Ram, thank you so much for taking the time to join the GRTiQ Podcast and talking to us about OpenLedger and all the exciting things happening there. I'll put links in the show notes for any listeners who want to get involved and learn more about the launch of the Testnet and other things related to the ecosystem. If listeners want to stay in touch with you and follow the things you are working on, what's the best way for them to do that?

Ram Kumar (48:44):

They can reach out to me on Twitter at @RamKumarTweet, and my DMs are open.


DISCLOSURE: GRTIQ is not affiliated, associated, authorized, endorsed by, or in any other way connected with The Graph, or any of its subsidiaries or affiliates.  This material has been prepared for information purposes only, and it is not intended to provide, and should not be relied upon for, tax, legal, financial, or investment advice. The content for this material is developed from sources believed to be providing accurate information. The Graph token holders should do their own research regarding individual Indexers and the risks, including objectives, charges, and expenses, associated with the purchase of GRT or the delegation of GRT.

©GRTIQ.com