He Broke Turkish Twitter - Abhijit Mehta

Download MP3

Kevin Riggle: Howdy, folks. This is the War Stories podcast on Critical Point once again.

I'm Kevin Riggle, and I'm here today with Abhijit Mehta, who is a friend of mine from Akamai days, but here to tell us about an incident that he was involved with while he was at Twitter.

Abhijit is, among other things, I think you described it as the lead engineer, like the person who took the engineering and business case for Twitter Blue to Jack Dorsey.

Do I have that right?

Abhijit Mehta: I used to run the engineering team for Twitter Blue.

Yeah, I ran the engineering org behind subscriptions and was the first engineer on Blue.

So yeah, it was an exciting product to work on and an exciting time to be at Twitter.

Kevin: And so every time somebody's like, ah, Elon Musk, creator of Twitter—or whatever the hell they call it now, X—Premium, whatever, I'm like, no, no. That was my friend Abhijit. I know— [Elon] doesn't get to take credit for this. Very important, so... yes.

We'll get into it in just a sec, but first we'll roll the titles.

[music]

Kevin: And we're back.

Once again, this is the War Stories Podcast on Critical Point. We're here with Abhijit Mehta.

Abhijit, can you tell us a little bit about yourself and what got you into a place where you could break production—or where you would be involved in the response to an incident?

Abhijit: Absolutely.

So I'm actually a theoretical physicist by training. As a kid, I've always been interested in technology and how the world works. So yeah, I actually did my PhD in theoretical condensed matter physics, understanding how large collections of electrons interact with each other. Really fun.

I started with quantum phase transitions. And one of the things that always fascinated me was the connection between beautiful theoretical mathematical descriptions of the world, and then the actual messy experiments.

And to kind of cross that chasm, I did a lot of computational work. So even though I was a physicist, most of my research was computing based. I would run quantum Monte Carlo simulations to study complex systems.

So I've always been interested in technology and in computers. It's actually kind of funny: when I was a kid, my dad used to run a lot of infrastructure for messaging and other systems at Procter & Gamble and did all this work with IBM mainframes.

So I'm probably one of the youngest people who knows REXX, an IBM mainframe language.

Kevin: Yup, yup.

Abhijit: But it was really fun, because I kind of grew up in that. My dad, in the past, used to teach computer science in New York. And so I was kind of... grew up with that and with that ethos of the power of distributed computing.

But yeah, in any case, one of the fun things was growing up both with the promise of technology, but also with that ethos of what production systems could do at scale.

So this is something that always interested me, right? You know, in grad school, I ran simulations on big distributed systems, grids that the NSF would run and things like that.

One of the interesting things about doing this with physics is when can you chop up the system and when can't you. There are some algorithms that are trivial to parallelize. There are others where, if you're simulating a complicated many-body interacting system of electrons, you actually have to do some careful math, thinking about how do I split this up in a way that's useful.

Kevin: Right, yeah.

Abhijit: So it was really fun because, like, on the one hand I'd run things at Duke and take over the machines in the physics department and in computer labs and stuff. It was actually really funny because back then there was a tight-knit community of us who did this at Duke.

So, you know, if you had a paper coming up or you're writing your thesis and you're like, oh, actually I need to collect one more result, you know, you could kind of send an email, hey guys, can you all pause your jobs for a week? I need this.

Kevin: Oh nice, okay.

Abhijit: You know, run sort of a year's worth of compute in a week.

Kevin: Excellent.

Abhijit: You know, so it was fun. It was like the technology was cool. There was kind of a cool social aspect of like, hey, this was at the cutting edge and there were a small group of us who kind of hacked things together to make it work. So this was an interesting learning experience.

And again, I was very fortunate that, I don't know how much you follow physics, but, you know, when they originally built the LHC, there was a thing where one of the detectors, like there was a liquid helium burst and there was damage and some of the runs were delayed by a couple of years.

This is the old, you should pay the grad students more because some soldering failed, whatever.

Kevin: Oh, okay, yeah.

Abhijit: Whatever it was. But NSF had rolled out all this compute capacity on Open Science Grid in anticipation of results coming back from some of these LHC runs, to do data analysis on them. And just by fortunate timing, I was doing my quantum Monte Carlo runs around then, when this thing burst, and suddenly it's like, okay, LHC data is delayed by two years and we have all this compute just sitting there. Cool, I'm gonna go simulate some complex systems.

Kevin: Nice.

Abhijit: It was really, I was lucky to be in a position where I also had really great mentors. One of my mentors at Cornell, you know, was really thoughtful in sort of, you know, teaching us how to combine computer science best practices with physics best practices to run these simulations at scale.

Kevin: Oh, that makes a big difference. Coming from a computer science background, academic code has a certain reputation of being very good at the one thing that it does, you know, but built to get that paper out the door and nothing else. And love you all, but yeah.

Abhijit: Which is funny, right? Because then you go to a big company or look at startups and you discover a lot of industry code is like that too.

Kevin: Well that's true too, yeah.

Abhijit: But at least with, like, maybe some testing, maybe using Git. Well, I think it's also, like, if you're fortunate enough to work at a company with an [engineering-]driven culture, right? Like Google was; certainly doing Mapping at Akamai was like that.

Kevin: Akamai was like that, yeah.

Abhijit: Then, you know, things are built by people who understand technology. And yeah, sometimes you make compromises to get something out the door quickly, but people generally get it.

But I'm sure a lot of your listeners have worked at places that are very, like, PM driven, very deadline driven, where it's like, I don't care if it was hacky, we need to deliver this to the customer yesterday, just get it done. And academic code is really like that. I mean, when you're writing code there, the PI in the lab is kind of like a product manager, right? Like they don't really care about the code you're writing, they care about getting the result and doing the science.

And, you know, I was fortunate that, you know, the folks I worked with... some of them were real quantum Monte Carlo experts, and they understood the value of taking the time to build the right computational tools. But that's generally not the culture, right?

The joke is you can even look at any part of a large academic code base, and it's all written in chunks that are roughly the size of a post-doc or a grad student trying to write a paper, right?

Kevin: Yes, yes, exactly.

Abhijit: Yeah, so, yeah. And I think Cyrus was pretty good in the quantum Monte Carlo package that he... he stewarded in making sure that folks, as they built their little modules for individual papers, that they actually did it in a way that kept the code base maintainable and clean and understandable.

Kevin: And integrated with the other grad-student-size projects well enough that—

Abhijit: Sometimes.

Kevin: Okay. [laughs]

Abhijit: At least integrated with the foundations and platform enough that it didn't mess up anyone else's project.

Kevin: Great, yeah. Yeah, nice, cool. Okay.

Abhijit: So yeah, so I mean, it was really fun. I had a great time, you know, doing science, playing with computers. You know, it was kind of fun to think that like, I could send out an email and be like, hey guys, I need something and press a button. And suddenly I was responsible for Duke's power bill spiking, right? So.

Kevin: Oh, okay! [laughs] Okay.

Abhijit: So it was fun.

Kevin: Yeah, yeah.

Abhijit: So this was kind of the backdrop as I was thinking about what to do next and deciding, hey, do I wanna go do a postdoc, work in a national lab? You know, I got a call from Akamai.

Kevin: Okay, interesting. How did they hear about you?

Abhijit: So, as you recall, Akamai hires a lot of sort of academic PhD types, right?

Kevin: Yup.

Abhijit: I think it's kind of funny because my first manager and his manager were both PhD physicists.

Kevin: Okay, nice.

Abhijit: So it's like, that's not necessarily common in the tech world—

Kevin: No, yeah.

Abhijit: —but working, I got hired into the platform mapping part of Akamai, worrying about DNS servers and mapping out the internet and all of that. So it was really fun because, you know, there's this whole group of like really great engineers who had sort of, who taught me how distributed systems work, how the internet worked, right?

Kevin: Yeah, yeah.

Abhijit: Like it was actually like kind of funny for me, you know, I had always had this impression that like, oh, things must be very, it must work well. There's a structure, right? And then like we'd see something funny happening with one of our DNS servers somewhere and there's some BGP issue and like the way to solve it is like, oh, don't worry. I know a guy in Chicago and we'll call him up and we'll get it fixed, right?

Kevin: Right, yeah. We had drinks last NANOG, I got his card, yeah. We'll go call up Chicago IEX, and yeah, get it sorted out or whatever.

Abhijit: Exactly, yeah. So it was fun. I think Akamai was a really great place to learn because of not just the technology and not just that Tom wrote the papers on consistent hashing and all of that, right? Which was cool, too. It was an awesome place to learn.

After some time on the platform team, I moved over to a part of the company called Akamai Labs. And that's really where I got my first taste of innovation and product development. Got to work with some really smart people there.

Kevin: Charlie Gero was the lead there?

Abhijit: Oh man, Charlie was such a joy to work with, right?

Kevin: Charlie's great, Charlie's great, I love him.

Abhijit: He's one of those people, you'd ask him a question and most people will search Stack Overflow, and Charlie would cite the C99 spec chapter and verse, and then he'd tell you what your code would compile into, and then he'd cite the Intel spec chapter and verse. I learned so much working with him.

Kevin: And yet also the level of technical knowledge to go from, yeah, the programming language spec all the way down to the machine code, but also a product sense and a sense of how everything fits together and becomes something that humans want to use.

Abhijit: Exactly, right? It was just a very pure way of thinking. You think about what humans want to use and then— people overuse the term first principles, but Charlie is a first principles guy, right?

And there was that ethos in Akamai Labs where it's like, at each stage, think about the actual thing, right? Read the language spec, read the RFCs, work with the customer.

We used to build demos for Akamai sales conferences. It was super fun. We do it for all-hands too. You remember some of those demos.

Kevin: Oh yes.

Abhijit: And building demos with that team was awesome because there was a really high technical bar that we wouldn't fake anything, we'd write good code, we wouldn't just hack stuff together. But we also wanted it to look good. We wanted to evoke that, ah, when you do a thing.

Kevin: Yeah. That moment where it clicks, the moment where you get it, the moment where you understand why you want to buy it or your customers want to buy it and why you should want to sell it to them.

Abhijit: Exactly. So like, did a lot of projects in Akamai Labs, really fun. And it was super fun, right? We had one of those aha moments.

I remember we were running a demo at an all-hands where we did a Skype call to India and it looked great. It was what everyone expected video should look like. It didn't look bad.

Then we hit a button and we switched it over to the thing we had built, running it through Akamai's platform, and it like, became crystal clear sharp.

Kevin: Ooh, nice.

Abhijit: And like, we didn't fake anything to make it happen.

Kevin: Right.

Abhijit: And like the audience went, whoa.

Kevin: Oh, nice.

Abhijit: And it was like, there's something really cool about that—

Kevin: Very satisfying.

Abhijit: —where you can, you know, you're doing technology and it's really, there's beautiful math, there's beautiful computer science and the end result is a human being going, wow.

Kevin: Yes. That emotional reaction, yes.

Abhijit: Yeah, yeah And it's not just that they're impressed, but it's like—

Kevin: They're moved.

Abhijit: One thing that has been a theme for me working on both enterprise and consumer products is I love building things that bring people together. And, you know, like when my dad came to the United States in the 60s, yes, there were long distance calls, but like really, when he was communicating with his dad back in India, it would be, you know, you write a letter and, you know, write an aerogram and mail it back.

Kevin: Domestic long-distance calls were expensive. International long-distance calls, I can't imagine how much they cost.

Abhijit: Yeah, and it's crazy. If you had gone back 50 years further than that, instead of a few days by airplane, it would have been a few weeks by ship. Go back a couple hundred years further than that, and it was not even possible, right? And now I can talk to anyone anywhere in the world as if I'm a few feet away from them, just like that. That's amazing to me.

And so that was really, that was this thing that I discovered I loved at Akamai that, in particular with the video chat product, was like, hey, I can be doing really cool computer science, cool distributed systems work, and the end result is something that lets people do something they couldn't do otherwise. That helps people connect.

Kevin: Yeah, yeah, yeah. That lets you say hi to your niece on her birthday from the other side of the world
and keeps that connection alive, yeah.

Abhijit: Yeah. So yeah, so like this, that was a formative experience. I think another really formative lesson I learned at Akamai that affected how I think about things is, Akamai was a really customer focused company, right?

Kevin: Almost to a fault, like, yeah.

Abhijit: Oh, yeah, like we, even if customers stopped paying, we would like not take them off the platform until we were sure they understood... I think my first week there, I found myself on the phone with a VP at IBM fixing something. And that really left an impression on me, that deep focus on the customer.

So Akamai was an amazing experience. Learned a ton there. I went to Google after that. This was back in 2016. Yeah, moved out to the Bay Area. Was working at Google on their [infrastructure] team and in cloud.

It was funny because when I started at Google, I was in the SRE org. And they're going through and they're teaching us how Google works. And I'm like, oh, cool. I've seen this at Akamai. You talked about convergent evolution
before, right?

Kevin: Yeah.

Abhijit: And it's like, oh, OK, this is like the West Coast strain.

Kevin: Oh yeah.

Abhijit: You solve the same set of problems, you're going to have similar solutions.

Kevin: Yeah, nice.

Abhijit: Google was also a really great place to be, right? I think it's like one of those places, like every software engineer should work at Google once, right? Beautiful systems, beautiful tools, a really open culture, right? It was like really cool to me that my first week there, I could just poke around and see how everything ran.

Kevin: Go into the internals of the search algorithm, and like, if you want.

Abhijit: Yeah, I mean there were small bits of that that were proprietary, but like everything else, like, yeah, like you could, oh, here's how google.com renders.

Kevin: Okay!

Abhijit: And I think that that really open culture was one of Google's strengths, right? Like people at Google have, or had, a very strong sense of ownership of the company's mission, a strong sense of loyalty to the good it could do in the world.

And all sorts of good, unexpected things come from that. You know, the... the development environment there was like... wonderful. You know, it's funny, right? Because I spent time building the video chat product at Akamai. I'm at Google, I'm like, oh man, I could have done this in a couple of weeks at Google.

Kevin: Oh, yeah!

Abhijit: But then Google has five different video chat and Meet products, so it's like, okay—

Kevin: And five different PMs did.

Abhijit: Right, yeah, exactly, right? It has its own set of problems. So it was it was a fascinating chapter of my career.

In 2017, I guess it was. I got a call from Twitter and they were kind of like, hey, you used to do this innovation stuff at Akamai. We have this innovation team in the Ads org. We're trying to affect a turnaround. Would you like to join us?

And, you know, I was kind of like, you know, ads pays for such a large fraction of Silicon Valley. It seemed intellectually honest to do it for a while.

Kevin: Sure.

Abhijit: I was also a little bit personally fascinated by it, right?

Kevin: Yeah, I mean, it was paying your salary at Google, right?

Abhijit: Yeah.

Kevin: Or like okay, cloud technically, but yeah.

Abhijit: I mean, ads was—

Kevin: Ads was paying for the development of cloud there.

Abhijit: Exactly, right? The reason there was a cloud to sell was because it had been built for ads and search.

When I was a kid, my dad worked at Procter & Gamble. And P&G, big CPG company. you know, we knew some of the PhD psychologists who used to work on ads there. And there was a fascinating part of brand— maybe I should back up and say what a brand ad is, right?

Kevin: Okay, yeah.

Abhijit: So there are roughly speaking, two different types of internet ads. There's direct response ads, which are kind of like, you see an ad, you click on it, and a thing happens. You click on it and money changes hands.

Kevin: That's, we have a new raincoat. You're in the market for a raincoat. Try our new raincoat. It's 20% better than our competitors' raincoats.

Abhijit: Exactly.

Kevin: Click here, give us your credit card information. We will ship you a raincoat.

Abhijit: Click here, buy a thing. Click here, install the new Fruit Ninja app.

Kevin: Right.

Abhijit: It might even be something like click here and visit this website. Right?

Kevin: Even if that's just like, yeah.

Abhijit: Yeah.

Kevin: The goal is just getting eyeballs on the... yeah.

Abhijit: Exactly. And so direct response ads are very mathematical. There's a whole literature of things called second price auctions and things of that sort that Google and Facebook do very well.

Kevin: Oh, and you can price them because you know how much money you're making if they convert.

Abhijit: Mmm hmm. Exactly. And if you're an advertiser, you can see how much return you're getting on them.

Kevin: Right, right.

Abhijit: And so as long as you're getting a positive return on ad spend, you keep buying them.

Kevin: As soon as you start seeing a positive return you like crank the knob, or you start cranking the knob, you see if the positive return keeps following the knob, and if it does you just like turn the spigot on, until the money stops flowing.

Abhijit: Exactly, exactly [laughter]. And direct response ads are, it's very mathematical. It's a numbers game. So companies like Google and Facebook that have lots of eyeballs on what they do, they're gonna do a really good job with DR ads.
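
To make the second-price auction idea mentioned above concrete, here's a minimal illustrative sketch; the numbers and method names are made up, and real ad auctions layer quality scores, budgets, and pacing on top of this.

```java
import java.util.List;

class SecondPriceAuction {
    // The highest bidder wins the impression but pays the second-highest bid,
    // which is why bidding your true value is the safe strategy.
    static double clearingPrice(List<Double> bids) {
        double highest = Double.NEGATIVE_INFINITY;
        double second = Double.NEGATIVE_INFINITY;
        for (double bid : bids) {
            if (bid > highest) {
                second = highest;
                highest = bid;
            } else if (bid > second) {
                second = bid;
            }
        }
        return second;
    }

    public static void main(String[] args) {
        // The $2.50 bidder wins the slot but is only charged $1.80.
        System.out.println(clearingPrice(List.of(1.20, 2.50, 1.80))); // 1.8
    }
}
```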

The other type of advertising is called brand ads. And this is what you think of when you think of TV commercials.

Do kids these days still watch TV?

Kevin: Uh, YouTube ads are still a lot of brand ads.

Abhijit: Yeah, YouTube ads. Or Super Bowl ads.

Kevin: Do kids these days still watch YouTube?

Abhijit: Yeah, Super Bowl ads are the canonical example.

Kevin: Okay.

Abhijit: But yeah, TV ads, radio ads, magazine ads. The idea behind a brand ad is not that like, if— it's not that, oh, if you see an ad for Tide, you're gonna click on it and buy Tide. It's if you see an ad for Tide, next time you are in the grocery store or shopping online, maybe, in addition to... All or whatever, you're going to consider Tide.

And then maybe because you considered it, there's a higher probability you might buy it. And then if you like it, maybe you'll buy it again. And then maybe you're a customer for life. And now P&G has made a bunch of money because they have a Tide customer for life. Right?

Kevin: Maybe less so with Tide, but with other things, I think about like cars, for example, like also educating you about what kind of person owns a Lexus is incredibly valuable, in two ways, one of which is like, if that resonates with you, then you're more likely to go buy the Lexus, but also the signaling effect of everyone else knowing what kind of person it is that you are because you drive a Lexus provides value to Lexus owners.

Abhijit: And I'm glad you brought up that example because that kind of, one of the things I was interested in with brand ads is like, the value of brand ads is really squishy. What is it worth?

The first example, the Tide example I gave you, companies like Procter & Gamble and Coca-Cola have done the math, right? There's even an MIT study about what effect considering one more brand has on your lifetime spend for a brand.

Kevin: Oh, interesting. Okay.

Abhijit: And so like in Procter & Gamble's case, they actually have some science behind it: okay, they show you the ad for Tide, they know how much it's worth to them for you to be a Tide customer for life.

The Lexus case is maybe a little bit squishier, right? Like you see a Lexus ad, like that alters your behavior somehow. Buying a Lexus is an infrequent event for most people, right? Like, you know, how do you quantify it? And then there's this whole continuum in the middle, right? Like if you see a Super Bowl ad, does it actually make you... more or less likely to buy Doritos or whatever.

Kevin: Right.

Abhijit: Like, who knows, right?

Kevin: Well, how many Doritos ads have I watched in my life? And, you know, what is the cumulative effect those have on my like purchasing decisions? At some point it becomes, yeah.

Abhijit: Yeah. Yeah, exactly.

So, you know, and on the flip side though, like brand ads are demonstrably powerful. There's this great article, I think it was in the New York Times in like 2008, that I remember quite vividly, where the WHO was trying to reduce the incidence of childhood diarrhea and other illnesses in a handful of African countries and they had all of these campaigns and they weren't working.

And so they partnered with, I think it was Unilever, they partnered with one of these consumer goods brands to say like, okay, let's use your brand ads tricks to fix this.

And so they developed a commercial together with the goal of promoting hand washing and a couple other things. And within three months of launching this ad campaign, cases of illness went down. It was a very clear cause and effect.

Kevin: Ooh. Nice.

Abhijit: We showed an ad, and human behavior and human health outcomes were better.

Kevin: Right. Yeah.

Abhijit: So I don't know, this was just kind of fascinating to me.

Kevin: Yeah.

Abhijit: And I was like, well, I don't see myself as someone who works on ads forever. I'm not passionate about ads, but it is kind of an interesting part of our life. Let me go do this, it'll be a good adventure.

Twitter, by the way, made most of its money off of brand ads because when you think about a social network like Twitter, it had fewer users than say Facebook, right? However, it has, had a lot of cultural cachet, right? Like a tweet might appear on the evening news. Celebrities use Twitter. So it had a very strong brand ads business.

Kevin: All the journalists are on Twitter, and are very active on Twitter, and so news breaks on Twitter, and so it's this cycle of relevance, yeah.

Abhijit: Yeah.

Kevin: But so it wasn't so much direct response ads, it wasn't so much like, you know, here's a raincoat.

Abhijit: It was both. It was both. But it was more brand. Most of the business was brand ads. It's actually kind of interesting.

What we did over time is we built up the direct response business as well. So both ended up being important. And you actually want both because brand ads revenue follows the economy, right? Like during downturn, there's gonna be less spend, whereas direct response is a little bit more consistent. And so they kind of counterbalance each other.

Kevin: Because, if you're doing a direct response campaign, you're, unless it's in the very early days, you're making money on it definitionally.

Abhijit: Exactly.

Kevin: And so yeah, yeah.

Abhijit: That's exactly right.

Kevin: Okay, nice. So what is being advertised in direct response will probably change depending on, you know, broader economic cycles and fads and, you know, the seasons and whatever, but like direct response campaigns in general would be less cyclic.

Abhijit: Exactly.

Kevin: Got it.

Abhijit: That's exactly right.

So I was working on this team called Brand Innovation. And, you know, any ads org has a lot of pieces, right? There's, you're going to have teams that think about building the interfaces advertisers use to create and manage campaigns. You're going to think about teams that run the distributed serving infrastructure...

Kevin: Almost none of which is visible to a consumer. I'm sitting on Twitter and I'm like, what are all of these people doing? Why does it take, I don't know, a thousand people to make ads show up in my Twitter feed? But I guess the answer is because there is this whole customer base who are the advertisers who need a ton of support for that in order to make those ads show up.

Abhijit: Yeah. Exactly.

Kevin: Okay, great. Got it.

Abhijit: And you need your ad serving infrastructure to be really robust so that it doesn't slow you down. Right?

Kevin: I mean, that was also a classic thing at Akamai. One of Akamai's classic business propositions, if I recall, was for advertising. Because like, yeah, if your ad 404s and you get an empty box, that is a real bad day for everybody.

Abhijit: Even if it loads slowly, right?

Kevin: Oh, yeah.

Abhijit: You're much less likely to have a click
and a conversion.

Kevin: Right, yes. I might already have even scrolled past it, depending.

Abhijit: Yeah, so the ads need to be really performant. So serving and the interface for advertisers are important, but there are really two other things that are kind of the two levers you have with almost no ceiling that you can keep pushing on to increase spend.

One of them is all the machine learning stuff to do ads targeting. You know, this is like... you pick up Facebook and you're like, oh my gosh, how did they know that I wanted this exact thing, right? Like Facebook's ML is very good.

But there's also the creative, right? That's kind of the word we use for the ad format, the video, the picture, the text, like the actual content of the ad itself.

And you know, you think about like, hey, what's the difference between what you could do for a magazine ad versus a Super Bowl ad on TV? And it's huge, right? You think of all the crazy Super Bowl ads you've seen, like the medium of video gives you a lot of interesting space to play with. I imagine it was similar when you went from text classified ads in newspapers to, you know, glossy picture ads in magazines, right? Like it gives you a lot more.

So the ad format is the canvas, if you will, for building ads on, you know, Twitter or Facebook or whatever. And I was working on a platform for ad formats at Twitter. So several of us re-architected the fundamental platform behind ad formats, or cards, which is what they looked like.

Kevin: And a 'card' in Twitter parlance is, that's like the tweet which has a link in it that, where there's an image from the site and the headline or—

Abhijit: The thing that until a few weeks ago had the headline, and—

Kevin: Yes, exactly.

Abhijit: And so cards are an important part of the internet, right?

Kevin: Yeah, oh my god.

Abhijit: Like if you send a URL in an iMessage even, right? You'll notice it doesn't say HTTP colon slash slash whatever, it shows you that picture and that title. Same thing on Facebook or Discord or whatever, right?

That thing is called a card, and the way those cards are populated is this: if you go to a web page, view the document source, and look at the HTML, there are these things called meta tags where you can specify, oh, I want this image to show up. I want this headline to show up. I want this other information to show up.

So that when somebody sends out a link to your site, the platform that that link is on can go crawl the website, get the meta tags, and render a card.
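
As a rough sketch of that crawl-and-read step, here's what reading card meta tags might look like in Java using the jsoup HTML parser. The URL is a placeholder and this is illustrative, not Twitter's actual crawler; twitter:title/twitter:image and the Open Graph og:title/og:image variants are the standard tag names sites declare for link previews.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class CardPreview {
    public static void main(String[] args) throws Exception {
        // Fetch the linked page and read the meta tags its author declared
        // for link previews.
        Document doc = Jsoup.connect("https://example.com/article").get();

        String title = doc.select("meta[name=twitter:title]").attr("content");
        String image = doc.select("meta[name=twitter:image]").attr("content");

        // Many sites only set the Open Graph variants, so fall back to those.
        if (title.isEmpty()) {
            title = doc.select("meta[property=og:title]").attr("content");
        }
        if (image.isEmpty()) {
            image = doc.select("meta[property=og:image]").attr("content");
        }

        System.out.println("Card title: " + title);
        System.out.println("Card image: " + image);
    }
}
```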

So yeah, so if you think about a tweet, the beauty of tweets was that they were very simple, right? Just a little bit of text, and the embellishment you could have was either an image or a card. So cards really ended up being the canvas for ad formats at Twitter, you know, for most of the past 10 years.

And what we were doing, so the type of card I mentioned was like, hey, you send a link out and it renders a card. With ad formats, we actually had what were called stored cards. So an advertiser would go to ads.twitter.com or make a call to the Ads API.

And they could create cards that did more than what I just described with a link. So there was one card that was very popular, drove a lot of revenue, called the video website card. So it was a little video. If you clicked on it, it opened up, and the video would keep playing at the top of the screen. You could surf the product website underneath it while you're watching the video.

So sort of simple things like that, but by adding just a little bit more than just an image, it gave advertisers a lot to play with.

So there was increased demand for more flexibility in ad formats. And the old system at Twitter was such that each time an advertiser wanted a new type of card, like, you had to create a fundamentally new thing, right?

Kevin: Mmm. Okay.

Abhijit: So people would have to go and code in like, okay, here's the video website card.

Kevin: We want a poll card.

Abhijit: Exactly, a poll card, a media poll card, right? You know, media and vote on things. And you know, you'd do this, and you'd spec it out each time, and engineers would go implement a new card on all the platforms, you'd implement it on the backend. And you'd kind of wonder like, well, if I have a video website card and a video poll card, like, shouldn't it just be the website part and the poll part that I re-implement, not the whole thing?

Kevin: How composable can we make this?

Abhijit: Exactly, exactly.

So we moved to a component-based framework called unified cards, which was exactly that. So if you look at the Twitter Ads API today, if you create a card, you specify the components you want. Maybe you have media or a carousel, you might have a little bit of text underneath it, you might have a button, some of these components, and you can kind of mix and match them.

And of course, externally, we limit the combinations, right? Everybody can't do everything they want, there still needs to be some product coherence. But at least on the internal implementation, it's based on these components, so that implementation is fast.
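
As a purely hypothetical sketch of that component idea (the class names below are invented for illustration and are not the real Ads API): a card becomes an ordered list of components, so a video website card and a video poll card share the same video component instead of each being a one-off implementation.

```java
import java.util.List;

// Invented names, for illustration only.
interface Component {}
record Video(String mediaUrl) implements Component {}
record WebsiteButton(String label, String destinationUrl) implements Component {}
record Poll(List<String> choices) implements Component {}

record UnifiedCard(List<Component> components) {}

class Cards {
    // "Video website card": video on top, tappable website button below.
    static UnifiedCard videoWebsiteCard() {
        return new UnifiedCard(List.of(
                new Video("https://example.com/ad.mp4"),
                new WebsiteButton("Shop raincoats", "https://example.com/raincoats")));
    }

    // "Video poll card": reuses the same Video component, swaps in a Poll.
    static UnifiedCard videoPollCard() {
        return new UnifiedCard(List.of(
                new Video("https://example.com/ad.mp4"),
                new Poll(List.of("Yes", "No"))));
    }
}
```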

Kevin: When an advertiser comes and is like, we want a new thing, you're not giving them the full flexibility because that's overwhelming, but you're like, oh yeah, we can mock that up very easily using the internal components and then hand that to them.

And then be able to maintain it on the backend rather than having to have each of these as its own special thing and have to, if you want to make a change to all of the poll widgets, have to go through all 12 different kinds of poll-using cards and fix them.

Abhijit: Right. That's exactly right.

Kevin: You just fix the poll widget, the one poll widget, and then, yeah.

Abhijit: Yup, yup.

So, you know, as we did this, you know, it seems simple, but you have to be really careful because the ad format is literally like the pointy tip of the spear for all revenue that Twitter would get, right?

Kevin: [laughs] Right, right. Until the launch of Twitter Blue!

Abhijit: [laughs] We'll get to that.

Kevin: Yes, okay.

Abhijit: But you were giving the Akamai example of, oh, if your ad has a 404, that's no good.

Kevin: You get the little broken image icon, yeah.

Abhijit: Yeah. If the ad loads too slowly, that's no good.

Kevin: Yeah.

Abhijit: Well, if the ad loads slowly, that's no good. But also, it's not enough for it to look good to the user. It has to work end to end.

So any time you interact with an ad on the internet, there are all sorts of events that fire back that sort of prove that you looked at the ad, that then the company can go back to the advertiser and say, hey, you said that you would pay us for each video that a user watched at least two seconds of. Here are all the people who watched two seconds of video. Right?

Kevin: The billing side of this is huge.

Abhijit: Exactly.

And so there's a lot of complexity there: even if the ad shows up and you see the video and it's great, if it didn't fire that event that says you watched it for a couple seconds, Twitter doesn't get paid.
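
A rough sketch of the kind of client-side check that sits behind a billable event like that; the class, threshold constant, and event name here are invented for illustration, not Twitter's actual telemetry code.

```java
// Illustrative only: fire the billable "watched at least two seconds" event
// exactly once, when playback crosses the threshold the advertiser pays on.
class VideoViewTracker {
    private static final long BILLABLE_VIEW_MS = 2_000;
    private boolean billableViewFired = false;

    void onPlaybackProgress(long playedMs) {
        if (!billableViewFired && playedMs >= BILLABLE_VIEW_MS) {
            billableViewFired = true;
            logEvent("unified_cards_video_view_2s"); // hypothetical event name
        }
    }

    private void logEvent(String name) {
        // In a real client this would be queued and sent to a telemetry service.
        System.out.println("event: " + name);
    }
}
```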

And so we were really, so we had to be really careful with this. And when we actually rolled out the first version of it, we did an A/B test for the video website card where we had the old one, and then we had the new component-based one.

And we tested really extensively to make sure the performance was equal because it may not seem like a big deal, but, the last— when Twitter was a public company, it was making what, like $4 or 5 billion a year, right?

Kevin: Yeah, yeah.

Abhijit: A significant fraction of that, like almost all of that was through ads that used unified cards.

Kevin: Okay.

Abhijit: Like a significant fraction of that would have been some of these video ads, like, you know, say the video website card.

Kevin: If that represents 40% of $4 billion worth of revenue, that is a lot of money—

Abhijit: Right!

Kevin: —so don't fuck up.

Abhijit: If I have a performance drop that even drops that by 1%, that's a lot of money.

Kevin: Right, that's a lot of money. Yes, yes. A small percentage of a large number, it turns
out, is a large number.

Abhijit: Indeed, indeed.

So we had this really great cross-functional team, iOS, Android, web, backend engineers running this thing, running simulations. Things are looking really great.

And then one of the staff engineers, who was an iOS engineer by trade, but he and I would do a lot of data stuff together, he comes to me one day. He's like, there's something really weird happening in Turkey.

Kevin: Okay.

Abhijit: We're like, OK. So we look at the graphs. The graphs are looking great, except for Android devices in Turkey, we're seeing a drop in performance from the new unified card.

Kevin: Meaning ad performance.

Abhijit: Ad performance.

Kevin: Meaning like click-through rate or view rate or view duration or whatever.

Abhijit: Exactly, those numbers appeared to be dropping.

Kevin: Okay. And like, markedly, this wasn't just a little, you know, because there's always noise in these graphs, but this was all like, oh, this has just fallen off a cliff kind of thing.

Abhijit: Well I mean that was our first question.

Kevin: Oh, okay.

Abhijit: Is this just noise?

Because like you need to make sure to slice it the right way. Because first you ask, well, okay, we rolled this out in this version of the Android app. So who actually updated and is there an interaction between the version of Twitter you're running and the version of the operating system...

Kevin: So you have to drill down into the analytics—

Abhijit: To make sure.

Kevin: —to sort of like make sure that you understand what it is that you're seeing here.

Abhijit: Exactly, right?

Like that should be your first question. When you see an anomaly, you first wanna drill down and like make sure there is actually an anomaly, like understand what you're seeing. And, you know, Android devices are a little bit tricky, right, because there are so many different types, they can be running different versions of the operating system.

Kevin: Depends on what carrier you're on—

Abhijit: Yeah.

Kevin: Like the combinatorial explosion of all these different variables you have to think about is huge.

Abhijit: Yep.

So we spent a lot of time. We drilled into all these possibilities. And, nope. It was real, right?

Kevin: It really was, all Android devices, in Turkey.

Abhijit: Performance was down on Android devices in Turkey.

Kevin: Okay. That's a remarkably specific, you're like...

Abhijit: Incredibly specific.

Kevin: Why, why was he looking at this?

Was it just like, he happened to stumble across it or was he doing a more systemic sort of like look into Android performance?

Abhijit: He was doing a more systemic look at global performance.

Kevin: Oh okay.

Abhijit: And it was like he saw a little dip and he's like, is that noise or is it real?

Kevin: Oh sure.

Abhijit: He clicks in it. And so he had done the amount of drilling down to realize it was Android devices in
Turkey.

Kevin: And so he was doing this sort of more systemic look as part of the rollout of the unified cards to make sure that there weren't any of these anomalies and whoops, here's one.

Abhijit: Exactly. Exactly, right?

And this is, again, as I mentioned, the revenue implications were huge, so we were being extremely careful with this.

Kevin: Well, and, like, Turkey's not an enormous market, but it's not a small market either. Like, I don't know how it was for Android, or for Twitter, rather, but just like, in terms of number of people and like, GDP and all those things, like.

Abhijit: There's that, but there's a thing— So this was, it's funny, right? Working at Akamai and on infrastructure work at Google, when you think of scale, it means a very particular set of things, right?

Kevin: It means the world.

Abhijit: It means the world.

But there's, the things that you worry about varying are, you might think about BGP, you think about connectivity, you think about the... how different data centers operate. When you're at a consumer product company like Twitter, scale can mean something that has nothing to do with backend technology, right?

Scale can also mean, all right, this has to run on different languages. It has to run in different places. It has to run in places with different regulations, right? Building a product under GDPR, the European data laws, is very different than building a product in the United States. So, you know... you have this whole set of additional things to worry about with scale.

Kevin: And they're more the human dimension.

Abhijit: Exactly.

Kevin: We had the human dimension at Akamai, but we were much more concerned with, like, can these routers talk with each other? And did not have to so much deal with, yeah, other

Abhijit: Well, exactly.

And it's like, you know, you might, you probably read stories from time to time about a company introduces a new brand and then they have to do an exhaustive search to make sure the word doesn't mean anything offensive in any language, right?

So when we saw a performance drop in Turkey, it's not just, oh, you know, you do a risk analysis on how big the Turkey market is. But it's like, it's a canary in the coal mine thing, right? It's like, oh, when we thought about scaling this globally, there is something we missed.

And right now it's a dip in Turkey, but
like, could it blow up, right?

Kevin: There is some feature of, there is some fact about the world that was not accounted for in our mental model, and—

Abhijit: Exactly.

Kevin: —shit, now what?

Abhijit: And so this was difficult because running everything in the simulator in the United States, things looked fine, right? Like I would go, I would say, we had the facility to change the virtual location of your device. I'd change it to Turkey. Everything seemed to be okay.

Kevin: You've got a simulated Android device, and you set your locale to Turkey, and the carrier to a Turkish carrier.

Abhijit: Yeah, and, I mean, you don't necessarily have all of those levers, right? I mean, in an ideal world, you'd simulate a Turkish carrier, you'd change the locale, you'd change the language, you'd change everything. It's difficult to do that at scale, right? Maybe you have the ability to do that. Maybe you just start with making your device geographically think it's in Turkey, if you think that's the problem.

Kevin: Well and how good is your simulation also turns out to matter.

Abhijit: Exactly.

Kevin: Are you faking the GPS as well as the locale of the device, for example?

Abhijit: Right, precisely.

And I mean, we had the ability to fake things like GPS. We didn't like have a VPN endpoint in Turkey that we could use to fake it actually being on a Turkish carrier. So we got to the point where we couldn't figure out what's going on.

Kevin: Okay.

Abhijit: And we started to think of like, you know the thing, like you hear hoof beats, you should think horses not zebras.

Kevin: Yes.

Abhijit: And it's, like, well, we thought of all the horses. So we started to like wonder, hey, is there some exotic thing going on, right?

Kevin: Right, yeah.

Abhijit: This was back when, you know, the Turkish government had been playing DNS games with a couple of websites, where, you know, Google's DNS servers, 8.8.8.8, like, Turkey was doing some BGP spoofing of them.

Kevin: Because a lot of people were using the quad-8 servers in Turkey to get around the Turkish government's sort of like bans on certain websites or...

Abhijit: So this was in the middle of that.

And we wondered like, hey, is there something fishy going on there? Like maybe when we're trying to send events back, they're not coming back to us because of some shenanigans and like...

Kevin: Is the government intentionally black-holing our telemetry or unintentionally black-holing our telemetry?

Abhijit: Yeah.

And particularly because Google was in the middle of it, we're like, this isn't happening on iPhone. It is happening on Android. Maybe Android phones, maybe there's something in the network stack that's doing something different. Because we were at our wits' end.

We were starting to think, hey, can we get money to send an engineer to Turkey to go debug this?

Kevin: Right, yeah. Yeah.

Abhijit: So in any case, we were at our wits end. And then the lead Android engineer on the project called us up laughing.

Kevin: Hmm!

Abhijit: So he was looking over the logs one more time, and things looked fine. And then something seemed funny to his eyes. So he stopped and looked in detail.

And he noticed the telemetry that was coming back, all of which were prefixed by the words "unified cards," because these were unified cards, that the 'i' looked funny.

Kevin: The 'i' in "unified cards".

Abhijit: Yeah, the 'i'. There's, something looked strange about it, as it scrolled through his terminal. So he dug into it and...

Kevin: What do you mean, the 'i' looked funny?

Abhijit: Like, it looked funny! And then he looked at it, it was missing the dot, right?

Kevin: Right.

Abhijit: Like you'd have a lowercase 'i' and it was missing the dot.

Kevin: Right.

Abhijit: And it's like, well, that's weird. Why is that happening? And then he runs it through the processing and he's like, sure enough, when we process those logs, the events don't show up: the 'i' doesn't have a dot, and you're matching against an event prefixed by "unified cards" where the 'i' does have a dot.

Kevin: Where the 'i' has a dot. Okay.

Abhijit: Yeah.

Kevin: So you were actually receiving the events, but you weren't able to match them because you were filtering for, on some level, the correct spelling, the string, the literal, and you were not filtering for "unified cards" with an 'i' that doesn't have a dot, which is, okay.

Abhijit: So here's what happened.

Kevin: Okay.

Abhijit: So in Java, right, like, you write code, it's pretty common. You spit out strings, words into log files as you go through, and you do logging for all sorts of things, diagnostics, billing, whatever.

And, you know, generally, engineers will do something like they'll have a file with a list of strings, and they'll... they'll say, convert them all to lowercase to make sure there isn't some weirdness with uppercase and lowercase. And then you think, oh, I'm fine.

Kevin: And a string in this context is just any bit of text.

Abhijit: Any bit of text, yep.

Kevin: And it can be, as I think will become relevant in a second, it can be in any language. We do a lot of manipulation, like you say, uppercase to lowercase, all these kinds of things, but a string is just the word that programmers use to refer to text for very historical reasons that I don't even know. [laughs]

Abhijit: And generally, it's something we don't even, you generally don't write them directly in the code you write. So for example, when we internationalize a product and you have a set of text and you translate it into a hundred different languages, in the code, I would say like, hey, display the text that's supposed to go in this prompt. And then there's a file somewhere that says, okay, I'm in this country, I'm gonna display this in whatever language.

Kevin: And this is a feature of larger software systems. So if you're writing, you do a little Python course online, you're writing these strings in your file verbatim.

But as soon as you start working with a large enough software system where you have users who are using your product in multiple languages, then you will develop— you will use a framework which provides you with these internationalization and localization features which allow you to, in the code that you're writing, be like, you know, so I want to display a dialog box to the user that has, you know, this text in it, like you say.

And so when you do the "alert parenthesis", rather than just having the literal string in there, you do alert parenthesis and you call a function, which goes and consults that table that you're talking about, and which returns the correct version of that string for the user in the user's language, in the language that the user has configured.
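
A minimal sketch of that kind of lookup in Java, using the standard ResourceBundle mechanism; the bundle name, key, and properties files are assumed for illustration, not any particular product's code.

```java
import java.util.Locale;
import java.util.ResourceBundle;

class Prompts {
    // Assumes files like messages.properties and messages_tr.properties exist
    // on the classpath, each mapping the same key to translated text.
    static String confirmButtonLabel(Locale userLocale) {
        ResourceBundle bundle = ResourceBundle.getBundle("messages", userLocale);
        return bundle.getString("confirm.button.label");
    }
}
```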

Abhijit: Exactly.

But here's the thing. You do that for text that the user sees. You also have a set of text that you don't internationalize because you use it for your own logging. Like, "unified cards click". Every time there's a click on a unified card, I write to a little file, "unified cards click: true". Something, right?

Kevin: Yeah. It sends a packet to a telemetry server somewhere, or it sends an email.

Abhijit: Yes.

So the assumption you make, like your example about Python code was really good. The assumption you make is, if I'm showing something to the user, I call this file that does this table look up and displays the thing in language.

But if I'm just writing something out to a log file, the language doesn't matter. It should just be simple letters, right? And you don't even really think about language then, because you're really just thinking, oh, hey, I've got the alphabet. I write it out.

Kevin: Especially as an American programmer, you're like, maybe this is technically Unicode under the hood, but it could be old school ASCII for all we care.

Abhijit: Yeah. Well, and you probably, for those of us of a certain age, you probably got these habits when you just were worrying about ASCII, right?

Kevin: The 127 characters of the ASCII character set should be enough for anyone.

Abhijit: Right.

Well, I mean, you even think about this, like, when you look at text on a website, that was Unicode before the URL itself could be Unicode, right? Like, there was an assumption, if I type in iTunes.com, those are just the same Roman letters that, you know, you'd use whether you're speaking German or French or English or whatever, right? So, anyway, nobody thinks about internationalization for these kind of utility, backend things, logging, yeah.

And it turned out that was an okay assumption on the iPhone, but the funny thing is that in Java, these functions, like toUpperCase(), toLowerCase(), that people use a lot for backend logging, are locale specific. So they are different depending on where you are.

Kevin: Okay. So that's like, the Java language has a feature which understands what locale the system is set to and will do theoretically intelligent things on the assumption that this text is in the user's language. Okay.

Abhijit: Exactly.

Kevin: Now, the funny thing there is even there, alphabets are the same in most places, right?

If I have a capital I and I call toLowerCase(), it's gonna look like a lowercase i with a dot, whether I'm in America or France or Germany or India or even a place with a non-Roman language, right? Like if I'm in India and using Hindi or if I'm in Korea, like, a capital I is still a capital I, and it'll turn to a lowercase i.

Abhijit: You have a loan word. You're talking about your employer, so you use the word Akamai, and you're not gonna transliterate that. You'll just take that and write Roman characters.

Kevin: Exactly.

Abhijit: So this will work fine almost everywhere. But in the Turkish alphabet, there are two letter i's. There is one with a dot and one without a dot. And capital I without a dot, its lowercase counterpart, is lowercase ı without a dot. So capital I without a dot, that's like the English capital I.

Kevin: Oh, and so there's also a capital İ with a dot whose lowercase form is lowercase i with a dot.

Abhijit: Yes.

Kevin: [exasperated laughter]

Abhijit: So Turkish has two i's, but then that correspondence of uppercase to lowercase, you know.

Kevin: This is like the difference between the N and Ñ in Spanish, the N with the tilde on top and the N without the tilde on top, yes.

Abhijit: Right, but it's worse than that. It would be as if N without the tilde on top, its lowercase version was ñ with the tilde on top, right?

So I mean, that's a great example, but you wouldn't have the corresponding problem with Ns because lowercase n without a tilde is the counterpart to uppercase, yeah.

Kevin: Because they correspond to the same letters in English, yes, whereas...

Abhijit: Yes.

So if you Google this, you will find that this is something that countless Java programmers over the decades have encountered in other pathological ways. It's known as the "infamous Turkish i bug".
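
To show the bug in isolation, here's a minimal Java sketch of what locale-sensitive lowercasing does to an ASCII capital 'I' under a Turkish locale; the event name is a stand-in, not the exact string Twitter logged.

```java
import java.util.Locale;

public class TurkishIDemo {
    public static void main(String[] args) {
        String event = "UNIFIED_CARDS_CLICK"; // stand-in event name

        // toLowerCase() with no argument uses the device's default locale;
        // passing the locale explicitly here makes the difference visible.
        String turkish = event.toLowerCase(Locale.forLanguageTag("tr-TR"));
        String english = event.toLowerCase(Locale.ENGLISH);

        System.out.println(turkish); // unıfıed_cards_clıck  (dotless ı, U+0131)
        System.out.println(english); // unified_cards_click

        // The match against the ASCII spelling then fails.
        System.out.println(turkish.equals("unified_cards_click")); // false
        System.out.println(english.equals("unified_cards_click")); // true
    }
}
```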

Kevin: Okay. You were not aware of the infamous Turkish i bug. [laughter]

Abhijit: [laughter]

Kevin: Until now!

Abhijit: Until now. But that's what it was. We had a capital I, and it turned into a lowercase i without a dot. And

Kevin: And so how did you figure out the existence of the infamous Turkish i bug? The guy who was looking at the logs was like, oh, we actually are getting the telemetry back, but something about it looks weird. And you get in closer and closer and closer until you're like, the dot's missing—

Abhijit: There's no dot.

Kevin: And then you back out and you're like, why is the dot missing? And what was sort of the next step from
that?

Abhijit: So at that point, it was fairly quick, because you're like, why is the dot missing? Well, at that point, you're like, all right, there's something weird going on with Unicode here. And then that gives you a hint to start thinking about locale. And that's when it comes out.

Kevin: It's interesting to me also I guess that Java would be doing that even for system level messages.

Abhijit: It was interesting to us too. [laughs]

Kevin: Right, yes! [laughs] Surprising and disconcerting, yes.

Abhijit: Yeah, yeah, like that's not what— and the funny thing is like, it's not what you'd expect because you look, you read the documentation, you see how it's used, like, you know, like nobody would want that, right?

Kevin: I mean, I assume that it's actually a feature of the Java String class.

Abhijit: Yeah.

Kevin: The first thing I did at my first job out of school was to implement Unicode support in our Java-based compiler. And so I got very familiar with the Java String class and... They, Java was actually one of the first major languages, I think, to have like really robust Unicode support. That was one of its selling points in the early days. And, so yeah, it is baked into the language library at a very fundamental level. But that is not necessarily what you want in this case.

Abhijit: Right.

And it's like, it's something you'd want to be explicit about, right?

Kevin: Yeahhhh.

Abhijit: Like you wouldn't want that to be set somewhere else and affect things, right?

Kevin: Yeah, exactly.

Abhijit: Like or have two classes, localized string and system, right? Things like that.

Kevin: Yeah, yeah, exactly.

Abhijit: So yeah, that was it. And we fixed it and the performance was great.
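
The general shape of the fix, sketched in Java (the helper class here is invented, not the actual Twitter code): pass an explicit, culture-neutral locale whenever you case-map machine-facing strings like log keys, so the result no longer depends on the user's device settings.

```java
import java.util.Locale;

final class LogKeys {
    private LogKeys() {}

    // Locale.ROOT gives locale-independent case mapping: 'I' -> 'i' everywhere,
    // regardless of the device's language or region settings.
    static String normalize(String key) {
        return key.toLowerCase(Locale.ROOT);
    }
}
```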

Kevin: Great. Okay, great.

So it was really just down to that, like, one little surprise, one little, I mean, well, okay, a couple things here. One is the behavior of the Turkish alphabet, and one is the behavior of the Java String class, both of which are, like, correct according to some, you know, some local perception of the world, but which are interacting in a way that you didn't, no one involved probably fully expected.

Abhijit: Exactly. We were totally cool with the Turkish alphabet.

Kevin: Right.

Abhijit: It was the Java String class we were pretty upset at.

Kevin: Yes, yes.

Abhijit: Yeah, and I mean, it's interesting, right?

Because there are little things that can bite you, and setting up the test environment to catch things like that is, in general, tricky. I mean, the funny thing is if someone on our team had spoken Turkish... right? ...And their phone had been set to Turkish, like we would have seen it quickly, but you don't always have that.

Kevin: Yeah. You would have caught that in testing, yeah.

Abhijit: And that's like a really, again, like that's an interesting property of working at a place where you have client code running on devices around the world and where, you know, all these sort of unexpected things can happen, right? It's definitely easier when you're running all the code on... in your own servers, and...

Kevin: The locale is set to whatever you set the server locale to and it never changes and yeah.

Abhijit: Yup.

Kevin: Yeah, that makes the logging.... Yeah, like dealing with remote logging is actually like a, yeah...

Abhijit: It's interesting, and it's funny too, because you end up with, maybe you have events that you kind of care about for your own telemetry. Then you have events that you need to bill on. Then you have events where like the customer is doing a thing they expect a response at. Each of those have different ways they need to be handled, right?

Kevin: And different levels of severity when they break, yeah.

Abhijit: Precisely. Yeah, yeah, like if my telemetry breaks, like it's not great, but it's not the end of the world. If the thing that advertisers use to bill us on breaks, like that's not good.

Kevin: That's a big incident. Yeah. That's all-hands-on-deck. Yeah.

Abhijit: Indeed.

Fortunately, we were able to avoid a true incident because we were cautious in how we rolled this out. Another thing I learned at Akamai, but yeah, so.

Kevin: Not rolling everything out to the whole world at once.

I mean, at Akamai, we couldn't roll everything out to the whole world at once. Like trying to patch, you know, get all 200,000 servers to take updates in the span of like 30 minutes was just not going to happen.

Abhijit: Yeah, well, I mean, being thoughtful about how to roll things out, like a few years later with Twitter Blue, it was kind of interesting because on the day we launched Twitter Blue, the product, we ended up rolling out, or using for the first time, like three different new microservices, simply because there's all sorts of stuff that goes into subscriptions that you need new stuff for.

Kevin: The ability to take credit cards, which had previously not been a feature of—

Abhijit: Yeah, talk to Apple, talk to Google. And definitely, a lot of these principles from Akamai and Google were what let us launch three new things in one day without any hiccups.

Kevin: Okay, yes.

Abhijit: But it has to be very carefully designed and rolled out.

Kevin: Yeah, yeah.

Abhijit: Well, and that leads to something really interesting: when you roll something out, you always do it gradually one way or another. And thinking about how to do it gradually is an interesting problem, right? Because, yeah, you want to make sure you probe some of these things that are difficult, but you only probe the ones you want to.

For a lot of consumer products, if you read "Chaos Monkeys", right, they talk about how Facebook will launch stuff in New Zealand first. Or when we launched the first version of Twitter Blue, we did it in Australia and Canada.

And it's because, like, okay, they're fairly similar countries, they have similar properties to the United States, they have only one or two languages. In the case of Canada, you have to internationalize.

Kevin: Well, that lets you test your internationalization machinery on a language that you're more likely to have people on the team who speak, before you, you know, try to internationalize to other—

Abhijit: Yeah.

Although that can be funny too, because everybody on the team spells color C-O-L-O-R, but the two countries you're rolling out in, it's C-O-L-O-U-R, so, yeah.

Kevin: Oh true, yes, yes.

They're like, we know that an American company launched this. Please just fix it.

Yeah, and trying to pick those test markets, or ways to get useful signal, without overwhelming your ability to respond to it, while also... being able to understand what you're getting back.

Abhijit: Yeah, exactly, right?

At a place like Akamai or Google, if you're rolling out a backend service, maybe you do a few machines in a lot of regions in a lot of places.

Whereas if you're rolling out a consumer product, like you think about a few regions that have certain demographic characteristics, certain language characteristics, and it's an interesting problem thinking how you do that.
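
[Editor's note: a minimal sketch of the kind of country-scoped, percentage-based gate described above. The FeatureGate class, flag values, and hashing scheme are hypothetical, just one common way such a ramp is implemented.]

```java
import java.util.Set;

class FeatureGate {
    private final Set<String> launchCountries;
    private final int rampPercent; // 0-100, dialed up as confidence grows

    FeatureGate(Set<String> launchCountries, int rampPercent) {
        this.launchCountries = launchCountries;
        this.rampPercent = rampPercent;
    }

    boolean isEnabled(long userId, String countryCode) {
        if (!launchCountries.contains(countryCode)) return false;
        // Stable hash so a given user stays in or out of the ramp between requests.
        return Math.floorMod(Long.hashCode(userId * 2654435761L), 100) < rampPercent;
    }
}

class RolloutDemo {
    public static void main(String[] args) {
        // e.g. launch to 10% of users in Australia and Canada first.
        FeatureGate blueLaunch = new FeatureGate(Set.of("AU", "CA"), 10);
        System.out.println(blueLaunch.isEnabled(12345L, "AU")); // in or out of the ramp
        System.out.println(blueLaunch.isEnabled(12345L, "US")); // always false
    }
}
```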

Kevin: Any parting thoughts from this experience? Like, were there any big takeaways for you from this?

Abhijit: Oh, this was really fun.

I think, for me, it's funny because there are certainly incidents that are big and flashy and high adrenaline in the moment. And we had a chat before we did this. We've both been through a lot of those, sometimes together.

Kevin: Yeah!

Abhijit: You can talk about those and those are fun. But I think that sometimes the sort of slow burns and the times where you avoid things are really interesting. And just the different set of ways that things can come out is interesting.

Kevin: Well, and you learn from the near misses if you're paying attention to them, as much as you learn... This could have been a much bigger incident. Catching this in the phase that you caught it taught you a lot about the world and allowed you to avoid other things.

Abhijit: Exactly.

And I think it also, you know, the thing you said earlier about, like, convergent evolution versus Akamai being the Velvet Underground... the organizational culture aspect of this has always been really interesting to me, right?

Kevin: Mmm, yeah.

Abhijit: We were fortunate with this thing with unified cards, like, we had a handful of people who had seen senior engineers do this at other companies. You know, and it's interesting, because as I've talked to some of these folks who have gone on to other places, whether startups or big companies, and as I think about my own experience launching Twitter Blue, you learn small cultural lessons from these in ways that are often not apparent to the outside.

But like in many cases, the reason other things go smoothly, it's from the lessons and kind of the cultural things that you learn and then try to model for others that come out of experiences like this. Like, how do you handle incidents? How do you communicate in incidents?

Kevin: Yeah. And how do you create an ethos where everybody gets that that's what you're doing, right?

Abhijit: Yeah, yeah.

Kevin: Is there some place that people can find you online?

Abhijit: Yeah, so I'm on both LinkedIn and the social network formerly known as Twitter. My handle is Abhijit C. Mehta on Twitter and Mehta Abhijit on LinkedIn.

Kevin: Great, great.

Abhijit: Yeah, always happy to connect, chat with people.

Kevin: Awesome. And what are you working on now?

Abhijit: So I am doing some consulting. I'm cooking up a couple ideas, fleshing them out, deciding whether or not I have enough conviction to start a company around them, and also considering a couple interesting options.

Also, my third kid was born in December, so I've been taking some paternity leave over the past year. I was telling Kevin earlier, we drove around the country this summer, 7,500 miles on the minivan.

So life is an adventure. There's always something fun to do.

Kevin: It really is. Yeah. And it's good that you get some time to do that following, yeah... a good palate cleanser after the...

Abhijit: Yeah, Twitter was a good run. I had a lot of fun there. I learned a lot of stuff. I'm proud of the things we built, had a good time.

Kevin: It sounded like there were a lot of good people there and when y'all were cooking, you were cooking... like the best parts of it were really, really good.

Abhijit: Exactly, right?

And there was a strong culture of people who really cared about our users and cared about each other. And yeah, it was a fun place.

Kevin: It's been sad, the circumstances under which that has come out, but that has been one of the things I really took from it, especially in the first few months, what I called the combination Viking funeral slash Irish wake that we all had for the platform, on the platform.

Abhijit: [laughter]

Kevin: But the people from Twitter speaking up about "this was something that I did at Twitter": the story about the guy who got the character limit raised and who just, like, made that happen. And the story about how the logo was designed, and all this kind of stuff. Yeah, all the really good, smart, dedicated people who built that.

Abhijit: And there's been a lot of optimism that's come out of it, right?

I think a lot of the folks who were there are off creating new things that are awesome. We're all rooting for our colleagues who are still there.

In general, I'm an optimist. I think there's a lot of exciting things to come.

Kevin: Yeah, exactly. Well, and moments of change like this are, you know, as hard as they are, as much as they suck, are the fertile ground from which the next, you know, thing springs. I think we have both been through enough cycles over this now to be able to see that.

Abhijit: Yeah.

Well, and this is the subject for another episode or another podcast, but honestly, I think that there's a lot of really interesting ground to be trod with social media. I was talking before about video chat and how I love bringing people together. And yes, there are the highly publicized negative effects of social media, right? Doomscrolling, the easy formation of online mobs, things like that.

I think it's also really important to remember that social media allows people to connect in ways that weren't possible. It allows communities who would never have been able to talk to each other to come together and have a place. It allows information to flow in a way that wouldn't exist otherwise, right?

Kevin: I think it's easy for us, you know, this deep in, to take a lot of the positives for granted and it's, yeah, so.

Abhijit: Well, it's like, yeah, and I think that's the thing. How do we encourage more of the positives while also being realistic and facing the challenges head on?

Kevin: Right. Less of the negatives, exactly. Like anything.

Abhijit: Yeah, like anything, exactly.

Kevin: Awesome. Well, thank you so much, Abhijit. This has been a ton of fun.

Abhijit: Yeah!

Kevin: This has been the War Stories podcast on Critical Point with Abhijit Mehta, formerly of Twitter, now fun-employed and pursuing new opportunities.

Til next time.

Abhijit: Thanks.

Kevin: Thank you all so much for watching and listening!

As you could no doubt tell from our conversation in this episode Abhijit's career has been focused on driving innovation inside bigger tech companies, and he asked me to mention that he's currently consulting for companies who are, quote, "trying to inspire innovation and increase the product velocity of their engineering teams," so, if that's you, please do reach out to him at Mehta Abhijit on LinkedIn or Abhijit C. Mehta on Twitter, those links again for you there.

I would work with him again in a heartbeat, as I think was probably obvious, and I cannot speak highly enough of him and his work.

If you're watching this on YouTube, please like and subscribe down below. And if you aren't watching this on YouTube, you might want to check the channel out anyway.

This was such a good interview that I could not fit all of the good stuff into one hour-long episode, so I am going to be posting some really cool outtakes exclusively on YouTube, and so check the channel out and get subscribed and I am excited to share those with you too.

Also full edited transcripts and the audio version remain available at war stories dot critical point dot T-V.

If you have a favorite weird language fact, either about human languages or about computer languages, either works, please leave them in a comment below.

If you have an incident story you'd like to tell, please email us at hello at complex systems dot group. Not exclusively but particularly if you aren't a cis white dude, because... obviously.

Intro and outro music is "Sempai Funk" by Paul T. Starr, you can find me on Twitter as at Kevin Riggle, on Mastodon at Kevin Riggle at I-O-C dot exchange, and I've just added BlueSky at Kevin Riggle dot B sky dot social.

My consulting company, Complex Systems Group, is on the web at complex systems dot group.

And with that folks, til next time.
