Axelar AMA: Tarun Chitra from Gauntlet w/ Sergey Gorbunov

Galen Moore
January 17, 2023

The path to true interconnectivity between blockchains is fraught with pitfalls that a capable team must anticipate before launch. With the help of a seasoned sherpa, even the most perilous peaks can be scaled with relative ease.

Tarun Chitra is a quantitative researcher who left Wall Street to found Gauntlet, a testing ground for new financial instruments in the crypto space, before going on to become a partner at Robot Ventures, where he uses the lessons he learned from both traditional and decentralized finance to scout and back talented new developers.

Sergey Gorbunov, co-founder of Axelar, met up with Tarun to talk about the growth of DeFi over the past few years, making DeFi efficient without ceding to centralization, and how interblockchain communication will open the door to scores of new financial instruments yet to be seen in the crypto space.

You can listen to the full recording of the Tarun Chitra AMA on Axelar’s YouTube channel. Subscribe there for live coding sessions, high-level walk-throughs and more AMAs with key Axelar partners and leaders in Web3. Or, join us on Discord & follow us on Twitter for updates and discussion related to events like this!

Tarun:

Hey everyone, I'm Tarun, founder of Gauntlet and also an investor at Robot Ventures. Y’know, I've spent a bunch of time thinking about sort of different things in the cross-chain world, and I’m excited to talk with Sergey about them.

Sergey:

Awesome, great to have you here. Hey everyone, this is Sergey, I'm one of the co-founders of Axelar. Pretty excited for today's AMA, so why don't we get started? Tarun, do you wanna go first and tell us a little bit more? What do you do most of your days? I know you write some papers, you talk to many projects, you invest... Why don't you tell us, how does your day look?

Tarun:

Yeah, 95% of the time is actually running my company. What we do is we help protocols optimize their parameters. So: how do you choose interest rates? How do you choose margin requirements? We're constantly monitoring what's happening in the market; when something changes, like a big event that means a DEX or a lending protocol needs to adapt its parameters, we submit the governance proposals and work directly with the teams. I also invest, but I'm mainly a founder who invests in other founders, not really a VC leading rounds. But that's roughly how to describe my day. A lot of the time I'm basically just scouring through people's code and papers to figure out if there's some meta-level math problem that people are solving in slightly different ways. That led to our series of papers formalizing how certain types of automated market-makers (AMMs) work. And of course, I'm definitely searching for cross-chain problems that make sense to formalize.

Sergey:

Awesome. What's a good example of a DeFi project that you guys helped? I don't know if you can talk about it, but I'm curious: is there a specific parameter that you're most proud of, or an optimization you made that averted what would otherwise have been devastating effects?

Tarun:

Yeah, I think in early 2020, there was this very contentious issue in Compound. We work with a lot of lending protocols on monitoring collateral factors; collateral factors are roughly the equivalent of margin requirements, i.e., how much collateral to post in order to take out a loan. At that time, we were arguing that we needed to make a collateral-factor change that would cause at least a small number of loans to have some partial liquidations. There was a lot of contention over whether that was a good idea or not. We eventually prevailed.

A month later, when we were able to retroactively look at that choice, it definitely saved a few hundred million dollars of liquidations. That was one that I would say I’m pretty proud of. Then also for AMMs, we do a lot of incentive optimization, and that's something where it's not about reducing risk, but about increasing net revenue to the protocol without having to spend too much in acquisition costs. We've improved a lot of things that way; in Sushi, for example, helping make $20 million more for the LPs in the protocol.

Sergey:

Nice. Awesome. So let's talk about cross-chain a little bit. I'm curious, when did you start thinking about cross-chain in general? For me personally, it started a couple of years back, when we shipped Algorand and were looking for various ways to connect it and interact with other ecosystems. That’s the genesis that led us to build Axelar. What was it for you? When did it start?

Tarun:

Maybe in 2018 or early 2019; I was trying to understand the architectural differences between Polkadot and Cosmos. At that time, no one was calling it cross-chain, or bridging, or a foreign function interface for different virtual machines. Instead they were calling it “app-chains,” or “parachains,” or “shards,” or whatever. Somehow, the communication layer wasn’t specified. At that time, there was a lot of stuff in the Ethereum world that was extremely underspecified in terms of how shards would communicate with each other. Partly because I came from a centralized distributed-systems background as an engineer, I just felt some of the specs were not fully specified.

Sergey:

Yeah, and even Cosmos and Polkadot, those protocols were still in development. I think Polkadot only finished shipping in the last month and a half, and IBC launched last year, right? So when you started thinking about it, the whole space was undefined. The design decisions and everything else that went into those protocols were aimed at solving either scalability issues or growing composability within those two ecosystems. Would you say that's correct?

Tarun:

Yeah, I think that's correct, and to some extent this is still true, of course, much less for Cosmos nowadays. But both of those ecosystems were, I wouldn't say anti-DeFi, but were like, “we want non-financial applications” and “we're really focused on this idea of having a world of applications that interact with each other, not a world of financial applications.” I think the communities that built both those protocols were a little more anti-finance in some ways. They were more focused on scalability and how to build decentralized Twitter versus “you could do better atomic transaction guarantees for order books” or “derivative exchanges” or “lending protocols.” Once DeFi started growing on ETH in late 2020, it started becoming clear that the cross-chain world was going to be way more complicated than the raw-ETH world. That was the time I noticed the Cosmos community realizing that they should be thinking about DeFi when designing these things, because that will be the first application, no matter what.

Sergey:

Makes sense. Also around then, I remember a lot of talk about things like atomic swaps. I remember when we were starting Axelar, people saying, “can't you just do everything with atomic swaps and call it a day?” <Laugh>. We had to convince people that atomic swaps are not gonna solve their composability problems. Did you see the same back then?

Tarun:

Yeah, it was definitely interesting, because in 2019 there was still a bit of negative belief in the account model. There were a ton of new UTXO-based scalability chains that were promising a lot of things, and people were not totally convinced that we were going to end up with applications built on a bunch of shared global state, with some notion of economic agreement on that state. I think the invention of Uniswap and its successors destroyed that thesis to some extent, and the world had to adapt. Of course, these applications do need a global state, and contention over that state just makes them impossible to build with atomic swaps.

Sergey:

Makes sense. So, fast forward a couple of years, and we're now talking about cross-chain a lot more often; even a year ago, it wasn't nearly as big a theme. A lot of people were still thinking that Ethereum was gonna upgrade and take over, and all the other chains weren't gonna matter. I think last year we saw a lot of that perception change. Today, we live in a world with different solutions, different approaches. What do you think about them? How would you think about their security? What do you think about cross-chain security?

Tarun:

I think one of the first questions is: what is the collateral that exists in these systems? Because there's no fully collateral-free system that you can go cross-chain with, whether it's a staking chain that's intermediating, like in the case of Axelar; whether it's something that has some flavor of atomic swap, where you lock up collateral on one chain and mint it on the other; or whether it's Wormhole-like, where–

Sergey:

The bank is the collateral?

Tarun:

Yeah, exactly. Jump [Trading] is the collateral. First, start thinking about what the collateral is and what the collateral quality is, on its own, on the other chain. Then, start thinking about their correlation, and what types of events would cause them to be correlated in a way that would mean that by the time you go across the bridge, the collateral you put down is worth less than the collateral you're taking out, and what conditions would cause that. After that, reason about the application-level value: how much value are you actually transferring across this thing? Finally, what are the latency characteristics you care about? That's sort of my Maslow's hierarchy of cross-chain security.
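To make that hierarchy concrete, here's a minimal sketch in Python. Every field name and threshold below is illustrative, invented for this post; it is not from the AMA, from Axelar's codebase, or from any real risk model:

```python
from dataclasses import dataclass

@dataclass
class BridgeRoute:
    """Toy description of one cross-chain route; every field is a
    hypothetical input you'd have to estimate yourself."""
    collateral_value: float        # value backing the route (stake, locked assets)
    collateral_correlation: float  # correlation of source/destination collateral, in [0, 1]
    value_transferred: float       # application-level value crossing per period
    finality_seconds: float        # time-to-finality across the route

def assess_route(route: BridgeRoute) -> list[str]:
    """Walk the hierarchy in order: collateral, correlation,
    value at risk, latency. Thresholds are made up for illustration."""
    warnings = []
    # 1. Is the value crossing the bridge covered by the collateral behind it?
    if route.value_transferred > route.collateral_value:
        warnings.append("transfers exceed the collateral backing the route")
    # 2. Highly correlated collateral can lose value on both sides at once.
    if route.collateral_correlation > 0.9:
        warnings.append("collateral on both chains is likely to crash together")
    # 3. A long finality window widens the reorg / depeg exposure.
    if route.finality_seconds > 600:
        warnings.append("long latency window increases settlement risk")
    return warnings

print(assess_route(BridgeRoute(50e6, 0.95, 80e6, 900)))
```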

Sergey:

Good. So I guess let's start with the collateral question: it's not a question unique to cross-chain, right? Every time you talk about a base-chain layer, you have validators that have something at stake, or nothing at stake, and you have applications or assets built on top that could have more value, right? There could be a lot more at stake at the application layer than what the validators have. So I’m curious how you think about the differences between this collateral on a standalone proof-of-stake system and what validators have at stake in a cross-chain environment.

Tarun:

That's a great question. One of the important things that has been clear from a lot of cross-chain attacks is that collateral might not be what you think it is, in the sense that there's a synthetic asset representing the real asset, and they're supposed to stay pegged to be worth the same amount, but something happens such that they're not pegged. Either there's a huge mint event, maybe some of the supply gets burned on one side, or maybe there's some other reason the peg doesn't hold. Like in the case of a bridge like Synapse, where there are Curve pools on each side: if the Curve pool doesn't have enough liquidity, someone can effectively do an oracle attack.

One thing to remember is that you have to think about what the actual representation, the remote representation on the destination-chain, is of the actual asset you have. In some sense, the question is whether it’s synthetic or native. The second thing is, if it's yield-bearing, how does the yield accumulate, and are you paying some type of premium to move your collateral across? That's more the financial way of thinking about it. From the computer science way, it's having two machines that have different languages and are not exactly bit-wise identical in a lot of ways, so there's some translation being done. If that translation isn't done with good enough fidelity in all circumstances, then you can have things that are supposed to be equivalent on both sides not be equivalent. There's a duality of these two in cross-chain systems: you have to think of it both from the finance standpoint, where collateral should somehow be represented on both sides as the same thing, and from the machine-translation standpoint, where things that are supposed to be equal on both sides should actually be equal.
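The thin-liquidity failure mode Tarun mentions is easy to see in a toy model. Below is a hypothetical constant-product pool between a native asset and its synthetic representation (Synapse actually uses Curve-style stableswap pools, but the failure mode is the same):

```python
def implied_price(native_reserve: float, synthetic_reserve: float) -> float:
    """Spot price of the synthetic in units of the native asset."""
    return native_reserve / synthetic_reserve

def peg_after_dump(native_reserve: float, synthetic_reserve: float,
                   trade_size: float) -> float:
    """Sell `trade_size` synthetic into a constant-product pool and
    return the new implied price. Thin pools move a lot; that price
    movement is the opening for the oracle attack described above."""
    k = native_reserve * synthetic_reserve
    new_synth = synthetic_reserve + trade_size
    new_native = k / new_synth
    return implied_price(new_native, new_synth)

# Deep pool: the same dump barely dents the peg.
print(peg_after_dump(10_000_000, 10_000_000, 100_000))  # ~0.98
# Thin pool: the peg craters, and anything pricing off it is fooled.
print(peg_after_dump(200_000, 200_000, 100_000))        # ~0.44
```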

Sergey:

That makes sense. One of the things we've been thinking hard about, as we're thinking about Axelar, is how do you preserve all these functionalities in a decentralized way? Through a decentralized protocol you can certainly enforce some of these properties that you mentioned at an application layer or in a centralized way. What do you think about decentralized solutions versus centralized approaches?

Tarun:

The centralized approach is effectively the user saying: I trust the collateral guarantee of this one entity, and I believe that with this one entity, no matter what, my collateral will be made whole. We've learned from the multisig bridges that yes, there does need to be a bank on the other side; effectively, someone has to act like a bank. But in the decentralized world, you have to have some way for people to pool assets, where they're rewarded with what are effectively insurance premiums that the users are paying, whether it's in staking rewards, fees, or liquidity rewards. Those premia have to aggregate a big enough pool of assets to cover shortfalls if there's one of these invariant violations.

Finding the right incentive structure to do that, and the right hardware and cryptography infrastructure, is obviously the biggest thing that's difficult to get right in the decentralized world. It's always easier to find capital than actual validators, physical operators running nodes, as opposed to people who are delegated to and represent multiple forms of capital. There's always this discrepancy and mismatch between how much capital is being used for this effective insurance or validity guarantee, and how much capital is actually used for running nodes and doing the state transitions. That makes it a lot harder, because in the centralized world you just bucket that into one thing: you don't even think about how you're paying the premium, you don't have to think about how you're calculating that quantity, and you don't have to think about how everyone has to agree on that quantity being calculated correctly. You just say, “Okay, I trust Coinbase, and if anything goes wrong, they'll pay me exactly what I put in.” That's one of the biggest things: the design space of the system really explodes when you make it decentralized, which is both a huge opportunity and also a huge challenge.
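As a back-of-the-envelope illustration of that insurance framing (the numbers and the simple solvency test below are invented for this example; a real protocol would model tail risk, not just the expected shortfall):

```python
def pool_is_solvent(premium_per_week: float,
                    weeks_accumulated: int,
                    shortfall_probability: float,
                    shortfall_size: float) -> bool:
    """Can accumulated premiums (staking rewards, fees, liquidity
    rewards) cover the expected shortfall from an invariant violation?"""
    reserves = premium_per_week * weeks_accumulated
    expected_shortfall = shortfall_probability * shortfall_size
    return reserves >= expected_shortfall

# e.g. $10k/week in fees for a year vs. a 1% chance of a $100M hole:
print(pool_is_solvent(10_000, 52, 0.01, 100_000_000))  # False -- underfunded
```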

Sergey:

For sure. But one thing with the decentralized approaches is that the balance between the collateral the validators have and the value you're securing doesn't have to be as tightly coupled as it is in centralized systems. In the sense that, if you look at proof-of-stake networks, let's say a Cosmos chain: validators themselves actually have very little at stake to secure the proof-of-stake network, other than whatever one ATOM they put in there and their reputation. If they get slashed, they get slashed with someone else's money. What do you think about that? What is it that validators should have at stake? Should they not have anything at stake? Do you think decentralization in this case allows you more freedom in the economic decisions you make, because the security relies on a decentralized validator set?

Tarun:

Yeah. I think they do effectively have to have something at stake, right? A very boring trad-fi example of this is the reason it's hard to become an insurance provider in most countries. In theory, to provide insurance you could just go to someone and say, “here's a document that says if some event happens that’s universally agreed on, then I'm gonna pay out a certain amount to you.” I can just write that document, give it to you, you pay me a premium, I keep collecting it and reinvesting it, and hopefully it covers the claims. One of the things governments do is force you to put up a ton of capital with the government as a requirement for getting an insurance license. In some ways, staking is the same thing, except there's not as much red tape around placing your capital as a form of insurance buffer. One of the hardest parts of proof-of-stake versus proof-of-work is that in proof-of-stake, the application on top of the chain can cannibalize the security of the underlying chain. If, for some reason, the synthetic assets are worth more than the blockchain, you can have issues. The other thing is vampire attacks, where an application promises more yield than the underlying base chain's staking reward; it can cannibalize stake, accumulate 33% in a pool, and that can be used in a deleterious manner. You have to think about the opportunity cost of capital whenever you reason about these systems. In the decentralized world, the opportunity cost is actually quite a bit more complicated, because there's no rate that people agree on as the minimum rate. When designing these things, you have to think about the economics to ensure that you're compensating validators for locking up capital in a way that keeps their opportunity cost from being too high.
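A stylized way to write down those two constraints (the formula and the numbers are illustrative assumptions, not any protocol's actual parameters):

```python
def required_staking_apr(opportunity_cost_apr: float,
                         expected_slash_loss_apr: float,
                         risk_premium_apr: float = 0.02) -> float:
    """Minimum reward rate that compensates a validator for locking up
    capital: the rate they'd earn elsewhere, plus expected slashing
    losses, plus a risk premium. A stylized model, not a real formula."""
    return opportunity_cost_apr + expected_slash_loss_apr + risk_premium_apr

def security_cannibalized(total_stake_value: float,
                          value_on_top: float) -> bool:
    """The cannibalization point above: if the assets on top are worth
    more than the stake securing them, attacking or vampiring the
    stake set becomes economically attractive."""
    return value_on_top > total_stake_value

print(required_staking_apr(0.05, 0.01))  # 0.08 -> need ~8% APR
print(security_cannibalized(1e9, 3e9))   # True -> a security deficit
```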

Sergey:

For sure, makes sense. Now let's talk a little bit about MEV. Just to define it for those who are listening: MEV is miner-extractable value, the ability of validators or miners to influence the order of transactions or events to try to extract some value for themselves as they process or secure the underlying chains. When we talk about cross-chain, these questions also come up. So what do you think about it? How would you define cross-chain MEV?

Tarun:

This is a fun question, because people will get into fights over the definition. I prefer defining it as any type of excess profit that validators can get from adjusting, adding or removing transactions in a way that's still consistent with the consensus protocol, but not consistent with users’ intent. Intent is obviously very hard to specify when transaction ordering is not up to the user, but their intent is not to be placed behind a big transaction; in some sense, that's the broad-strokes definition. Another version of this might be: validators see users submit a bunch of transactions, the transactions cause natural arbitrage, and the validators insert those arbitrage transactions themselves.

From an efficient-market standpoint, that is MEV. But one of the places the Flashbots community and the Solana community would disagree is that Flashbots would argue the miners earned a profit from executing that arbitrage by adding in a transaction. In the Solana world, people would probably say that's actually the market being efficient: it's fine if the miners are the ones doing it; why does it matter if it's them or someone else? There's a very fine line there. This is why I use the nebulous concept of user intent, partially because I think on each network, each community has a different basic, minimum-viable understanding of the intent of a transaction. In Solana, for instance, intent is actually quite different when there's this no-mempool behavior and you have to keep retrying your transaction, versus somewhere where there's a mempool, but you may have to sit there and wait until the gas price returns to what you are willing to pay.

Apologies that this isn't a super mathematically formal definition, but when you go cross-chain, individuals might have different definitions of what qualifies. Cross-chain MEV is a little simpler once you take that definition: a user has an intent on one chain, they make a transaction, and one of the validator sets of either chain decides to change the execution, on either the source or destination-chain, with regard to that intent. Again, because the consensus protocols don't agree: what the user submitted to the source-chain, if it was for a source-chain program, may not have been accepted by the source-chain's consensus, yet may actually have been accepted by the destination-chain's consensus, and a validator can take advantage of that difference in consensus or in virtual-machine behavior. So the validator adds, removes, or reorders the transaction in a manner that goes against the user's intent and also generates a positive expected value for the validator.
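To see what "excess profit from reordering against intent" looks like in the simplest single-chain case, here's a toy constant-product pool being sandwiched. The numbers are invented for illustration and the model ignores fees and gas:

```python
def swap(x_reserve: float, y_reserve: float, dx: float):
    """Constant-product swap: sell dx of X into the pool, receive dy of Y.
    Returns the new reserves and the amount received."""
    k = x_reserve * y_reserve
    new_x = x_reserve + dx
    new_y = k / new_x
    return new_x, new_y, y_reserve - new_y

# Honest ordering: the user swaps 100 X against a 10,000 / 10,000 pool.
_, _, honest_out = swap(10_000, 10_000, 100)
print(f"user receives {honest_out:.2f} Y")            # ~99.01

# Reordered: the validator front-runs with 500 X, lets the user trade
# at the worse price, then sells its Y back.
x, y, val_y = swap(10_000, 10_000, 500)               # validator buys first
x, y, user_out = swap(x, y, 100)                      # user's trade, worse price
_, _, val_x_back = swap(y, x, val_y)                  # validator unwinds
print(f"user receives {user_out:.2f} Y")              # ~89.85
print(f"validator profit: {val_x_back - 500:.2f} X")  # ~9.34 -- the excess profit
```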

Sergey:

One thing, when I think about cross-chain MEV: you could now potentially have multiple requests from multiple users, submitted on different source-chains, all going through the same destination-chain to the same application. So how would you define MEV in a multi-user setting? The reason I'm asking is because when a user submits an intent on a single chain, it's one intent; you have a notion of time, and you can say when the transaction has gone through. But what about the multi-user setting? There's no global clock across these chains; everybody runs at their own consensus speed. Do you think the single-user definition generalizes to multi-user or not?

Tarun:

Yeah, that's a great question. There's a sense in which going cross-chain is like time traveling, but randomly time traveling, because you have no guarantees that the clocks are synchronized. You just hope the clocks are close enough, up to some error. You could be going back in time relative to a block you saw, if the destination-chain reorg’d or didn't finalize correctly, or there's a view change. So the point is definitely taken that the multi-user, multi-time, multi-ordering setting is not even a partially ordered set of user intents. That being said, there’s still a sense in which a user could generate zero-knowledge proofs of the state transition they wanted. You could imagine a world where a user says, “Okay, I have both blockchains synced, and I have this transaction on the source-chain and this transaction on the destination-chain, and I've simulated both of them locally. I've executed them under what I believe the environment is at the current time as I perceive it, and I generate zero-knowledge proofs of the expected output. Then I submit those alongside the actual transactions to the validators, and I can measure whether the validators' execution actually got me to that state.” Now, I have no guarantee of getting to that state; the environment could have changed. Someone could have published a transaction, or a block of transactions, before the environment I expected. One question is: if I do that experiment a million times, what percentage of the time does the user generate a proof that matches their expectation? That's the probabilistic definition of user intent in this multi-tenant, multi-user, multi-clock world. Of course, it's very hard for the user to do that, and if you're a client, it's not like you're going to be trying to simulate the expected end state of your transaction. But if you were trying to formally define it, that would at least give you some notion of consistency. Somewhere with a lot of MEV, you'd expect the probability that the user's proof matches the validator's execution to be quite low; whereas in a world where there's not much, you'd expect them to match quite often.
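Here's that thought experiment as a crude Monte Carlo sketch. The Gaussian "environment noise" is a stand-in for other users' transactions landing first; a real version would simulate actual state transitions and verify proofs rather than draw random numbers:

```python
import random

def intent_match_probability(expected_output: float,
                             environment_noise: float,
                             tolerance: float,
                             trials: int = 100_000) -> float:
    """Run the experiment many times: simulate locally, then check how
    often the realized on-chain output lands within `tolerance` of the
    local simulation. Heavy MEV / contention = low match probability."""
    matches = 0
    for _ in range(trials):
        realized = expected_output + random.gauss(0.0, environment_noise)
        if abs(realized - expected_output) <= tolerance:
            matches += 1
    return matches / trials

# A quiet chain: local simulation almost always matches execution.
print(intent_match_probability(100.0, environment_noise=0.5, tolerance=1.0))   # ~0.95
# Heavy contention: the same user's intent is rarely realized.
print(intent_match_probability(100.0, environment_noise=10.0, tolerance=1.0))  # ~0.08
```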

Sergey:

But even with that definition, back to the multi-user setting: you can simulate it locally, but assuming there are a lot of transactions from other users, submitted on other chains you have no visibility into, then most of the time you'll get the wrong output when it's executed on the destination-chain.

Tarun:

For sure. Also, the confounding problem here is that it depends on application-level complexity. If you have something like a Uniswap transaction, a [constant function market maker] transaction, CFMMs have this property of translation invariance: if your slippage limit is really large, then no matter where in the block the trade is placed, it will still execute, provided there's non-zero liquidity. That's very different from something like an order book, where if I add an order at a certain price level and the rest of the book has moved more than a certain number of ticks away from where that order was defined, my order might be invalid. In the case of non-concentrated Uniswap LP shares, you have this translation invariance, and the translation invariance corresponds to how sensitive you are to execution timing.

You can get executed in any swap, at any time, provided the bare-minimum constraint of non-zero liquidity holds. Whereas for more complicated applications, there may be much longer-range dependencies. Let's say there's a program that relies on an oracle, and when the oracle is updated, it goes through and marks every order in an order book, then, based on those marks, liquidates some of them. Something like that is extremely sensitive to where in the block an order was updated. Maybe there's a user sending a cross-chain transaction saying, “Hey, I'm adding margin. I'm putting up ETH as collateral, minting Wormhole ETH on the other side, and adding the Wormhole ETH to my lending-pool collateral base, because I think there's gonna be an oracle update that's about to liquidate me.” That's not time-translation invariant, right?

That depends on order-translation invariance; it depends very strongly on precisely whether I get executed before or after the oracle update. I think one of the reasons defining the multi-user case is even harder is that it's quite application-specific. If you try to define it in a purely non-application-specific manner, you'll be forced into this very ugly two-user case: what's the probability that two users who both simulated locally end up, within some error, with a final output state equal to what they simulated? That ends up being a very hard thing to reason about for a generic computation.

But I do think in the application-specific instance, when you add in constraints, like an order-invariance type of property, or some notion of locality sensitivity for your transaction, you can maybe get something that makes more sense. So it's a bit of a cop-out answer: I'm saying it has to be application-specific and you may have to add in these constraints. But the most generic version ends up having this problem where you're taking the product of the probabilities that all the users land within some error of the true state they desire, and that quantity is extremely hard to control or reason about.

Sergey:

For sure. I mean, I'd claim you could probably even prove some type of impossibility result, that for some applications you actually can't estimate this, because you can't predict these things, right?

Tarun:

Yeah, exactly. This is the thing about CFMMs that, for me, as someone who came from traditional finance and more of a probability and pure-math background, is fascinating versus order books: they have this weird invariance property around execution. You might get worse execution if you get front-run, but you will still get executed; whereas in an order book that's not guaranteed, and there are all these edge cases forming a huge decision tree of whether the transaction is invalid or valid. I generally think the blockchain applications that work really well are ones that have these loose invariance properties. To your question, the applications you can reason about in this multi-tenant, multi-chain, multi-clock world are the ones with properties like this, where they're not super sensitive to reorderings, additions and deletions, except in some local manner, around the transaction that's sent.
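That execution guarantee is easy to demonstrate on a toy constant-product pool, contrasted with a limit order that simply dies when the book moves (all numbers below are invented for illustration):

```python
def cfmm_trade(x_reserve: float, y_reserve: float, dx: float, min_out: float):
    """Constant-product trade: it fills wherever it lands in the block,
    as long as the output clears the user's slippage limit."""
    dy = y_reserve - (x_reserve * y_reserve) / (x_reserve + dx)
    return dy if dy >= min_out else None  # None = reverted on slippage

def limit_buy_crosses(best_ask: float, limit_price: float) -> bool:
    """Order-book analogue: a limit buy only executes if the market
    hasn't moved past it, so validity depends on exact ordering."""
    return best_ask <= limit_price

# The same CFMM trade with a loose slippage limit, placed early in a
# block vs. after a large trade moved the pool: worse price, still fills.
print(cfmm_trade(10_000, 10_000, 100, min_out=50))  # ~99.0 Y
print(cfmm_trade(12_000, 8_334, 100, min_out=50))   # ~68 Y, still fills

# The limit order simply becomes invalid when the book moves away.
print(limit_buy_crosses(best_ask=101.0, limit_price=102.0))  # True
print(limit_buy_crosses(best_ask=105.0, limit_price=102.0))  # False
```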

Sergey:

For sure. So where do you think MEV should be addressed? You can address it on the source-chain, or on intermediary chains that are moving the traffic. But it feels like protecting at the application layer, right before transactions are potentially executed, may be the right thing for some applications. What do you think? Where should MEV be addressed?

Tarun:

I think in the long run, we're going to have some API or interface from the base-chain to the application, where application developers can basically describe how they want MEV execution to happen, like how they would enter an auction. Something like Flashbots' auction: the application developer says that when this particular function is called, it sends a message to the auction that says, “here's a constraint: any bundle that includes a transaction calling this function cannot do something of this form. It can't put more than five of these transaction calls in a single bundle.” So you're going to have this negotiation between applications and the searchers, the MEV finders. What I think will actually happen is that app developers will be able to specify constraints on their function calls, the call sites, and the local environment they're allowed to execute in. Those constraints will depend on each virtual machine where things get executed, and also on the intermediary that's carrying those transactions. I suspect there will be auctions, or mechanisms of a similar form, on each layer and leg: an auction on the source-chain, an auction on the cross-chain leg, an auction on the destination-chain. The application is allowed to have some sort of API, as part of consensus, that puts constraints on its execution even if it's reordered. In some ways, from a pure theoretical distributed-systems standpoint, this forces the things we were talking about earlier, these locality constraints. Again, I'm talking about this very informally; I don't know exactly what the definitions are, but it is clear that successful applications somehow have that built in. I think future applications will basically have to encode it.
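As a sketch of what such an application-declared constraint might look like (a hypothetical interface invented for this post, not Flashbots' actual API or any live protocol's):

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    target_function: str

@dataclass
class BundleConstraint:
    """An application-declared rule of the kind described above:
    'no bundle may call this function more than N times'."""
    function_name: str
    max_calls_per_bundle: int

def bundle_is_admissible(bundle: list[Transaction],
                         constraints: list[BundleConstraint]) -> bool:
    """What a builder or auction might run before accepting a bundle."""
    for c in constraints:
        calls = sum(1 for tx in bundle if tx.target_function == c.function_name)
        if calls > c.max_calls_per_bundle:
            return False
    return True

# e.g. a lending app caps liquidation calls at five per bundle:
cap = BundleConstraint("liquidate", max_calls_per_bundle=5)
print(bundle_is_admissible([Transaction("liquidate")] * 6, [cap]))  # False
```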