Streaming Audio: Apache Kafka® & Real-Time Data

Next-Gen Data Modeling, Integrity, and Governance with YODA

Confluent, founded by the original creators of Apache Kafka®

Season 1 Episode 261

In this episode, Kris interviews Doron Porat, Director of Infrastructure at Yotpo, and Liran Yogev, Director of Engineering at ZipRecruiter (formerly at Yotpo), about their experiences and strategies in dealing with data modeling at scale.

Yotpo has a vast and active data lake, comprising thousands of datasets that are processed by different engines, primarily Apache Spark™. They wanted to provide users with self-service tools for generating and utilizing data with maximum flexibility, but encountered difficulties, including poor standardization, low data reusability, limited data lineage, and unreliable datasets.

The team realized that Yotpo's modeling layer, which defines the structure and relationships of the data, needed to be separated from the execution layer, which defines and processes operations on the data.

This separation would give programmers better visibility into data pipelines across all execution engines, storage methods, and formats, as well as more governance control for exploration and automation.

To address these issues, they developed YODA, an internal tool that combines an excellent developer experience with dbt, Databricks, Airflow, Looker, and more, on top of a strong CI/CD and orchestration layer.

Yotpo is a B2B, SaaS e-commerce marketing platform that provides businesses with the necessary tools for accurate customer analytics, remarketing, support messaging, and more.

ZipRecruiter is a job site that utilizes AI matching to help businesses find the right candidates for their open roles.


Kris Jenkins (00:00):

In this week's Streaming Audio, we're talking to two platform engineers, Doron Porat and Liran Yogev, about how their platform and their understanding of what they're supposed to be building has evolved. Because a few years back, the heart of their platform was Apache Spark. They very reasonably thought that their job was to make life easy for Spark developers. But as they started to succeed with that, they realized there's actually a bigger picture going on when you're developing a platform team. Because the people writing data into the system may be getting their job done, but they aren't really successful until the people reading it back out for whatever purpose are successful. The readers need to be enabled by the writers.


Kris Jenkins (00:46):

They need to be able to answer questions like, what data is available? What does it look like? Where did it come from when I need to debug it? What's the quality of this data? Readers need tools to be able to answer those kinds of questions, these lineage, schema, and governance questions, without having to have a conversation with the writers every time. That takes a good platform team seeing the bigger picture.


Kris Jenkins (01:13):

That's what this week's episode is really about. Leveling up from being a first order platform team that's enabling individual departments to get their work done to being that second order platform team that keeps the whole company moving. It needs some technical changes. It needs some supporting tools. It needs some mindset changes around the organization. Doron and Liran are going to talk us through their journey on it. In some style, I have to say, these two have got a really good rapport. It was a great phone conversation. Streaming Audio is brought to you by Confluent Developer, more about that at the end. But for now, I'm your host, Kris Jenkins. This is Streaming Audio. Let's get into it.


Kris Jenkins (02:00):

I'm joined today by two partners in crime, Doron and Liran of, is it Yotpo? Am I pronouncing that correctly?


Doron Porat (02:07):

Yeah, 50% of us are from Yotpo. Yeah.


Liran Yogev (02:09):

Yes. The other 50% are from a company called ZipRecruiter.


Kris Jenkins (02:13):

ZipRecruiter, okay. We're going to have to go through that. But we're mostly talking about the data infrastructure stuff you've been doing at Yotpo, right?


Doron Porat (02:17):

Yeah.


Liran Yogev (02:17):

No.


Kris Jenkins (02:21):

No. Okay.


Liran Yogev (02:21):

Yeah. We're going to talk about both, I suppose.


Kris Jenkins (02:23):

This is going to be great. You've already hijacked the direction we were going in. It would be more fun, I can tell.


Liran Yogev (02:27):

Yes, this is going to be an improvised session.


Kris Jenkins (02:29):

Okay. In that case, let's start off. In two sentences, let's start with you, Doron. Tell me what Yotpo do.


Doron Porat (02:37):

Okay. Yotpo is a B2B SaaS company. We build an e-commerce marketing platform that serves e-com businesses with different products that we offer as part of the platform, with very strong synergies among them. These products can be a reviews solution, communication channels such as SMS messages or emails, a customer data platform to do segmentation, referral programs, loyalty, and so on and so on. Yeah, we're based in Tel Aviv, but we're a global company.


Kris Jenkins (03:17):

Okay. If I understand that from a man-on-the-street perspective: if I've got an e-commerce shop, you're the people who would do a mixture of customer analytics, remarketing, support messaging, and all those things I need to make the business actually marketable when I'm selling stuff.


Doron Porat (03:29):

Yeah, but it's a SaaS solution: onsite widgets and all kinds of B2B interfaces that include analytics as well, on all your data. But yeah, that's basically the point.


Kris Jenkins (03:43):

That's one of those things. I think from my point of view as a programmer, it's almost more interesting, the infrastructure. There's a huge amount of data flying around. From a programmer's perspective, it's a huge infrastructure business.


Doron Porat (03:57):

Yes, for sure.


Kris Jenkins (03:58):

As much as it is a marketing platform, right?


Doron Porat (04:00):

Yeah. We have all the B2C activity with the shoppers and visitors, that's a huge amount of data. Then again-


Kris Jenkins (04:06):

Yeah, Christmas traffic.


Doron Porat (04:07):

... we have all the orders, products. Yeah, we have November holidays coming in.


Kris Jenkins (04:11):

Oh, yeah. You've got Black Friday and Christmas coming up and all that fun.


Doron Porat (04:14):

Yes, every Monday.


Liran Yogev (04:15):

We hope we all have that, right?


Kris Jenkins (04:16):

Right. Good job we're recording this podcast before that tsunami arrives.


Doron Porat (04:20):

Thank you.


Liran Yogev (04:22):

They're going to be super successful. It's fine. Nothing bad is going to happen, right?


Kris Jenkins (04:26):

Yeah. We're going to understand how your infrastructure can scale to that kind of load.


Doron Porat (04:32):

Okay, cool.


Kris Jenkins (04:34):

Then we move on to you, Liran. You were saying your ... Oh, God, I've already forgotten the name. I'm terrible.


Liran Yogev (04:41):

Yeah, we used to work together up until three, four months ago. Then I left. I went to work for ZipRecruiter, where I've been working for the past couple of months. Yeah, but we're still working together. We have a podcast that we host together. It's in Hebrew. You can check it out if you like. It's called Data Swamp. But beyond that, we're still buddies, I suppose.


Doron Porat (05:04):

Yeah.


Kris Jenkins (05:06):

You did a talk recently at Current about the infrastructure for Yotpo and moving into data modeling, the infrastructure choices you made along the way, right?


Doron Porat (05:15):

Yes. It's the modeling platform that we dreamed of while we were doing things otherwise.


Kris Jenkins (05:25):

Maybe we should start there. Give me an idea of what state it was in before you started on this reinfrastructure project.


Doron Porat (05:33):

Okay. Basically we started in 2016, I think, when we built this self-service modeling infrastructure based on Spark.


Liran Yogev (05:45):

You can call it just ETL.


Doron Porat (05:47):

Yeah. It's an ETL framework. We based it on Spark. All people had to do was write their SQL into YAML files and configuration, spin up their Airflow DAG or whatever it is. They can run streaming jobs as well. We were very, very focused on enabling generalist developers to build their own data pipelines, to feed our data lake and to offload onto the data lake all kinds of processes that were very heavy on the service side, and analytics, of course. Because it also addressed BI developers, analysts, and basically whoever can write SQL.
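
A rough sketch of the kind of SQL-in-YAML job definition being described; the field names here are from memory and only approximate Metorikku's actual syntax:

```yaml
# Illustrative sketch only; approximates, but is not guaranteed to match,
# Metorikku's exact configuration syntax.
steps:
  - dataFrameName: daily_orders          # name of the intermediate result
    sql: |
      SELECT store_id,
             DATE(created_at) AS order_date,
             COUNT(*)         AS orders,
             SUM(total)       AS revenue
      FROM orders
      GROUP BY store_id, DATE(created_at)
output:
  - dataFrameName: daily_orders
    outputType: Parquet                   # note: engine and format details
    outputOptions:                        # live right next to the SQL
      saveMode: Overwrite
      path: s3://lake/marts/daily_orders
```

Note how the output format and other execution details sit right next to the business logic; that is exactly the coupling the conversation comes back to later.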


Doron Porat (06:24):

Back then, and up until a few years later, we were really focused on: just build, guys. Just build, give us more.


Liran Yogev (06:31):

I think we were so proud of this. We actually open-sourced this project because we thought everyone could enjoy it. We actually did talks about it. We were really proud of this project. It was just super successful, by the way. We had hundreds of data pipelines created this way.


Kris Jenkins (06:43):

What's it called?


Liran Yogev (06:43):

It's called Metorikku.


Kris Jenkins (06:43):

Metorikku.


Doron Porat (06:50):

Yeah, Metorikku.


Kris Jenkins (06:50):

It sounds like somewhere I could go on a holiday.


Liran Yogev (06:53):

Well, if you'd like a place called "metric" in Japanese, fine, you can go on holiday there, yes.


Doron Porat (06:57):

It's actually metric in Jap ... It's true.


Kris Jenkins (07:01):

Oh, I see. Okay. Metorikku.


Doron Porat (07:01):

Yeah, it's catchy.


Liran Yogev (07:01):

Is it catchy though?


Kris Jenkins (07:01):

I wasn't expecting to learn a little bit of Japanese in this podcast.


Liran Yogev (07:04):

Yes. I see, I see. We don't know Japanese. It's important for us to know ...


Doron Porat (07:08):

Just this one.


Doron Porat (07:09):

Yeah, it was very, very successful. But I think that with time, and I think it's a global data thing, we kind of understood that there's more to it than just enabling people to ...


Liran Yogev (07:21):

Producers.


Doron Porat (07:23):

Yeah. That's another thing. We were super, super focused on the producer side, which I think was juvenile. We just wanted to be popular and get developers happy building those pipelines. But we didn't really think of the whole downstream effects of whatever pipelines we were creating.


Doron Porat (07:41):

Yeah. You can simply call it a data swamp. Of course, it wasn't that extreme, but we lost control over what was being created and produced. All the different producers writing these Spark jobs weren't really aware of what existed elsewhere. I could be building a certain pipeline and Liran would build the exact same pipeline, only with slightly different names or slightly different metrics, and we wouldn't be aware of each other. It's really a matter of governance, but also a matter of understanding the holistic picture of what happens to the data: where it comes from, where it goes, who's using it, and continuity, in that sense.


Liran Yogev (08:21):

Yeah. There was no centralized place to create those, to see what there is, so not really a good ...


Doron Porat (08:27):

Actually, we decentralized it all the time.


Liran Yogev (08:29):

Because again, we were so focused on our engineers, we wanted to give them freedom, like microservices. They can create whatever they want in their environment. They don't really care about each other, so you don't have reusability. You don't have a centralized space where you see everything. You don't understand how to consume data. Again, we were missing a lot when we wrote this.


Kris Jenkins (08:44):

You've gone from that first order problem of we can't even write the data to, oh my God, we've written all this data and we don't actually know what we've got, right?


Doron Porat (08:51):

Yeah, exactly that.


Kris Jenkins (08:53):

The second order problem is actually managing it at scale.


Doron Porat (08:57):

Yeah. I think maybe two other things. One, we also started to think about developer experience, how fast it is to ship data to, I don't know, to production and have it ready and available all the way up to analytics or the application that's using it. We wanted to improve that.


Doron Porat (09:17):

The last thing, I think, is the coupling that we created between our business logic, the organizational business logic, and Spark, which is great, but I think that for a really long time we were positive Spark was the way to go and that we shouldn't look left and right. But as time went by, we started seeing that it's not the perfect solution for everything. Then we realized that everything we've written down in Metorikku is Spark. It's Spark SQL, but it's totally coupled to the underlying technology.


Liran Yogev (09:50):

I think it's really apparent in the streaming world, where Spark Structured Streaming, which is the Spark way of doing streaming, is basic in a lot of ways. It's not as advanced as other streaming engines. That's where we thought, "We want something else here. We're going to have to write it outside of our common way of writing data pipelines," which we really hated, because wait, now we're going to have two methods of writing pipelines. That's the challenge that triggered us: "Okay, there's something wrong here."


Kris Jenkins (10:18):

Right. Yeah. Just quickly, for those that don't know it well, how would you categorize Spark? What does it do? What's its strength and weakness?


Liran Yogev (10:29):

Spark is a distributed compute engine. It is, I suppose, helpful for performing batch operations on top of data, from small to big data. It does that pretty well. It has a lot of optimizations along the way to help create very complicated data pipelines on top of very large data sets. It's infinitely scalable.


Doron Porat (10:57):

Batch and micro batch.


Liran Yogev (11:03):

Batch and micro batch, yes.


Kris Jenkins (11:03):

Yes. Batch and micro batch, yep. We were recently thinking about trying to coin the term nano batch for five records at a time and really ...


Doron Porat (11:10):

I like it. It's actually a good solution for a lot of problems.


Doron Porat (11:15):

Yeah. I think there are a few things that are problematic with Spark. Some are solvable, some just with other technologies, some aren't. But first of all, it's not that easy to understand for people who do not know and understand Spark, which is part of the thing that we did. We made Spark available. But once things break, they go to the data team and say, "Oh, it broke. Can you help me read the logs? I don't understand what happened."


Doron Porat (11:43):

You need to be able to dig into the process to understand where things went wrong and what you should change, and all those beautiful configurations that you put into your job to configure it, you have to ... We really tried to generalize this. I think we did a really good job in generalizing, but you still have those edge cases where you need more, and it's not for the common developer.


Liran Yogev (12:05):

Yeah. For example, when I moved to ZipRecruiter, it's very different here. Here, we have people called data engineers. It's different than what we used to have at Yotpo. These are people who are really experts in Spark and experts in data technologies; there's a lot of knowledge to have. When we tried to make it simple, we actually reduced a lot of the knowledge you have to know, but also a lot of the really cool features that you can use if you know Spark and understand the technology better. It's different here, the way we work.


Doron Porat (12:32):

Yeah. You gained velocity across a large crowd of developers that can move forward now. But you pay for it again in terms of cost and performance ...


Liran Yogev (12:43):

Optimizations.


Doron Porat (12:43):

Yeah, and scaling the operation in a responsible manner. The second thing, I think, is that sometimes we would feel that Spark acquires a lot of resources for something that would not require that many resources under a different technology. Sometimes it just feels really bulky and heavy.


Kris Jenkins (13:00):

Give me an example.


Liran Yogev (13:02):

It's this huge hammer that you can use on a lot of things ... I think a lot of organizations have this. You can solve a lot of problems with Spark. It can do everything. You can call APIs and you can use it when you do streaming. You can do really, really weird things, by the way, on top of really small data sets in some cases. But it's also a very generic way to solve problems.


Liran Yogev (13:22):

But as Doron said, it is extremely heavy. There always has to be a cluster. The cluster has this minimum size that it needs. It has a long startup time. If you look at technologies that try to solve things faster, take KSQL for example, it's very different from that. Spark is a more heavy-lifting type of tool.


Kris Jenkins (13:42):

I remember working for a company years back where they would end up doing a Hadoop MapReduce job that took an hour to run to add up a thousand rows to get a total.


Liran Yogev (13:54):

Spark is the next generation, or the previous one, I don't know what to call it, but it's better than MapReduce. Still, it's another type of tool that is ...


Kris Jenkins (14:04):

A sledgehammer to crack a nut.


Liran Yogev (14:05):

Yeah.


Doron Porat (14:06):

Yeah. I think it's ...


Liran Yogev (14:08):

Yeah. But they're improving in that area. I don't think they'll remain the same. But right now ... Sorry, you were starting to say ... I'm so sorry.


Doron Porat (14:16):

I don't remember what I wanted to say. It's okay. But no, I agree with Liran on what he said. But I think we saw, even in terms of streaming, that we had a lot of problems with Spark as well, because of the micro batching, and scaling wasn't perfect. We were basically running our own self-managed Spark clusters, and still do in most cases.


Doron Porat (14:42):

We always had problems with scaling well and reacting to changes in the amounts of data coming in and in the oncoming batches, which we didn't want to solve by just adding more machines to ignore the problem. Streaming started not to feel really like streaming. I remember years ago when we tried to address the problem of updating, or upserting, data files in the data lake. We wanted to update Parquet files with CDC data. We used Spark with Hudi. It just didn't feel right. Things have changed, that was three years ago, but the batches were so long. We were running streaming jobs, and it took 40 minutes to process a batch.


Liran Yogev (15:30):

Yeah, that was insane.


Doron Porat (15:32):

Yeah. That's where we felt that maybe different workloads require different solutions, which is different than what we used to think up until then.


Kris Jenkins (15:40):

Right. Yes. You were trying a one-size-fits-all approach to solve this problem. What did you do? What was your first step to move away from that?


Liran Yogev (15:48):

We didn't start by moving away from this specific problem with Spark. Spark is still major in both companies, by the way. It will continue to be, because again, it's a really great tool for solving a lot of different issues, but we can use other tools that are more hyper-specialized for specific use cases.


Liran Yogev (16:04):

By the way, at Yotpo, for one of the problems that Doron just mentioned, updating the data lake really fast with arriving data from CDC streams, one of the tools that we actually did use for streaming was Upsolver, because they did a really good job with the minimum amount of resources needed for that. That was a really good solution back then.


Liran Yogev (16:21):

But again, we didn't come at it directly because we wanted to just switch to another technology other than Spark. We started looking at the entire problem that we talked about: governance, losing all that metadata, understanding and reusing more and more datasets, and not having them recreated by different teams.


Doron Porat (16:42):

Just a small remark. It's not that we wanted to replace Spark altogether or do it now. We wanted to have the ability to have this agility.


Liran Yogev (16:50):

The freedom.


Doron Porat (16:51):

Yeah, the freedom, thank you. That we can move away from Spark and not lose all the beautiful stuff that we've created. We want all the infrastructure to be in place and not to be totally coupled to Spark.


Liran Yogev (17:02):

The mission was actually decoupling and not switching to a different technology. Once we did a decoupling, now we have that freedom.


Doron Porat (17:08):

I think it's funny, because one of the things that we used to say to ourselves all the time in the infrastructure group was, "We're all about decoupling. Everything in the data platform is decoupled. Every solution is very dedicated to a specific need and requirement." We didn't even notice that we had this crazy coupling in the modeling area. I think this is the biggest aha moment we had.


Kris Jenkins (17:34):

By coupling in the modeling area, do you mean that your models were tightly coupled to Spark's way of doing things?


Doron Porat (17:39):

Yes.


Liran Yogev (17:40):

Yeah.


Doron Porat (17:40):

And reading on Spark SQL and everything.


Liran Yogev (17:44):

It's not just about Spark. It was coupled to the execution. It could have been a different technology. We could have used Snowflake, for example, to run our batch or streaming jobs, but at that time it was Spark. But again, part of the configuration of a model was: how are you going to execute it? What's going to be the output format? These are things that should probably be decoupled from your business logic. That was where we started from.


Kris Jenkins (18:08):

You've ended up with a mixture of data and how to deal with it rather than just pure data. I always think that data is the one thing that decouples, right? If you can make the connection between two systems, just data, that's as decoupled as you can possibly get.


Doron Porat (18:24):

I think that we also thought about, "Okay, eventually, we're just creating Parquet files." Anyone can use these Parquet files. It's not coupled to a certain engine. But going a step back, the actual modeling itself, the actual process ingesting and digesting the data, that part should be decoupled. That's the part that we wanted to keep across different implementations.


Liran Yogev (18:51):

I want to also add, because I don't think we talked about it before, that the focus on consumers is something that was really important. It's not just about the decoupling, it's also about being able to describe things so the consumer can consume data better. You talked about data being the ultimate decoupling, but what is data? Data, by its basic properties, is just a bunch of files that maybe have a schema defined somewhere. Someone can consume it with some different technologies.


Liran Yogev (19:16):

That was not enough for us. For example, one of the things you want to know is, "Okay, who is the owner of this data? Who owns this dataset? How can I contact them?" Or, for example, "What is really the contract for this dataset?" Another question I want answered: is this field aggregatable? Can I do a sum on top of it? Can I create my ... It's these questions about consumption that we really didn't answer when we just wrote a dataset somewhere for someone to consume.


Doron Porat (19:46):

I feel like we were talking about ... Just to finish one more thing. Yeah. Data catalogs kind of solve this, because it's much more observable and you have all the information there. You can consume it. But I think it's more the notion of the data producer being aware of the consumer and having this discussion while composing the data pipeline, not after the fact.


Kris Jenkins (20:10):

This is reminding me of data mesh ideas. This is one of the ideas of data mesh, where you treat your data as a product that you plan to make available.


Liran Yogev (20:19):

Yeah, this is kind of the same, but I think our thoughts came in before. No, I don't know.


Doron Porat (20:26):

We invented it. We just had a different name, not as good.


Liran Yogev (20:29):

We are very lazy, so we didn't really read data mesh, the entire thing. But once we saw data mesh was out and people talking about it, we were like, "Oh, that's kind of what we were doing."


Doron Porat (20:39):

Yeah. It really resonates with what we do. But there's one other thing it's very important to note. It goes for data mesh and whatever it is that we are trying to do. Infrastructure is one thing. It should be opinionated and really thought out, but then you have processes, organizational processes, that you have to take care of to bring this to life. Infrastructure is not enough. I think that's also a concept that data mesh is constantly talking about. This goes for every important thing that you want to do, like a transformation that you want to drive for the organization.


Kris Jenkins (21:13):

Yeah. I actually find this reassuring about data mesh that you could easily dismiss data mesh as just a buzzword. But the fact that so many people seem to be independently rediscovering some of these principles makes me think there's genuinely something in it that we need to be paying attention to.


Liran Yogev (21:32):

Oh, it's very real. It's problems ...


Kris Jenkins (21:34):

It's like the conclusion you'll come to if you keep searching.


Liran Yogev (21:38):

Yeah. I think it happens when the organization increases in size and increases the amount of data. Not the actual size of the data, but the amount of data assets that you have. It becomes so complicated, and the ownership model just doesn't work anymore.


Doron Porat (21:51):

Ownership. We didn't talk about ownership. It's a big thing, also in data mesh and in what we were trying to do.


Liran Yogev (21:58):

Yeah, that's probably the ultimate goal of everything: we want people to really own their data. Then we ask, what is actual ownership? Again, just to answer your question: yeah, it definitely resonates and it's definitely reassuring. I think on our podcast as well, we hear so many organizations struggle with these problems. Because as it scales, producing and consuming data just becomes this really insane, very complicated thing.


Kris Jenkins (22:24):

Yeah, it can be. If I'm understanding this correctly, in your case, you've got a particularly thorny issue in which your producers and consumers aren't actually developers in your company. They can sometimes be third parties writing jobs. Is that fair?


Liran Yogev (22:41):

I think for both of our companies, that's not one of the problems. One of the problems, and I think again, it's the size of the organization that dictates this, is that so many people are able to produce data. We should be allowing them to do that. If we were an unhealthy organization, we would allow only a specific set of people to write data.


Liran Yogev (23:00):

Then it would probably be easier to do everything, because it's always this bottleneck: everything goes through a bunch of people that are really interconnected. But in organizations, I think, so many different types of skilled people ...


Doron Porat (23:14):

Persona.


Liran Yogev (23:15):

Yes, personas can create data. We want them to create data, because they're doing it for different levels of the organization. Engineers produce data either to make their product data accessible, to create different features on top of data, or to do ML and data science things, while analytics wants the data to do analysis, to answer reports. They also want to create these datasets. They may not be for many, many use cases, but they may fit your use case or the use cases of other people in their department.


Liran Yogev (23:43):

Then you have BI developers or data engineers, which are also another beast in this area that produce data for different use cases.


Doron Porat (23:52):

We're talking about data democratization all the time, that people should have access to the data, but this is more than that. I really like to call it data tooling democratization. Everyone has the right and the ability to build their own data pipeline, and we should enable that. It's part of enabling growth.


Liran Yogev (24:12):

To answer your question, we don't have third parties creating data inside our systems, but it's just a lot of different people writing data.


Kris Jenkins (24:18):

It's so many different teams that, in a way, it structurally almost behaves like external parties. A large enough company is actually several small companies, quite often.


Liran Yogev (24:27):

Yeah.


Kris Jenkins (24:28):

Okay. I understand that now. As you say, there are a lot of different personas, and you have to get them to think differently about the data they're happily writing, and give them and the consumers more awareness. How do we even begin to tackle that?


Liran Yogev (24:47):

We're actually working right now on a framework at ZipRecruiter to help with this process. But I think that this word, process, is the beginning of everything. Hopefully the infrastructure and the tooling can help with facilitating it. But the first thing you need to do is ask these questions. You have to talk to people. It's just like designing any type of product. You say data as a product; it's a real thing.


Liran Yogev (25:08):

Product managers, analysts, BI developers, whoever is writing the data should own it. The first part is actually talking to people, understanding the consumption patterns, and making sure that in the end, this dataset will be usable. What our infrastructure is actually doing is pushing in that direction, in a way that you're going to have to describe a lot of different things when you write a dataset. You're going to have to describe the owner. You're going to have to describe the schema. You're going to have to describe how you do aggregations.


Liran Yogev (25:38):

If you're going to do all that, you should probably be aware of your consumers. One other part that's very important: since we did a really good job with this new framework that we built, there's lineage, so you know who consumes you, and you know what kinds of things you consume. The lineage part is super important for everyone to be aware of their surroundings. Again, it's a process, and the infrastructure helps push that.
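
As a concrete sketch of what "describing" a dataset like this might look like, here is a hypothetical descriptor; the keys are invented for illustration, not YODA's or ZipRecruiter's actual format:

```yaml
# Hypothetical dataset descriptor; every key here is invented for illustration.
dataset: orders_daily_summary
owner:
  team: commerce-analytics
  contact: "#commerce-analytics"     # who to talk to, per the questions above
schema:
  - name: store_id
    type: string
    description: Internal store identifier
  - name: revenue
    type: decimal(18,2)
    aggregatable: true               # safe to SUM across rows
  - name: conversion_rate
    type: double
    aggregatable: false              # a ratio; summing it is meaningless
```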


Kris Jenkins (26:03):

I find that super interesting. Because it's like, how can you begin to care about the quality of the data that you're writing until you know who cares about reading it?


Doron Porat (26:12):

Exactly. I also think of the concept of shift left, which is getting bigger and bigger in the industry. Eventually you come to the conclusion that unless you shift left, there's no way to overcome these issues. The further left you shift it, the better.


Kris Jenkins (26:35):

I'm going to make you define shift left as a movement.


Doron Porat (26:37):

Oh, I'm going to, yeah. Actually, I borrowed it from DataHub. I think they were the ones that used this term first. But I really like it. It actually means that you don't need to, or you can't, fix all the data issues and data corruption discrepancies on the analyst side, on the consuming application side. The furthest left you can fix them is on the producer side, on the service side, on the database side. The actual developer that now creates this new feature, which is going to create this new stream of data, has to be the one that is fully aware of the quality of the data that they're creating. If there are any data issues, they should be solved at the source.


Doron Porat (27:22):

I think this is the true key to building a healthy system. Otherwise, more and more people waste more and more time trying to figure out and fix the same problems that other people are going to solve, because you're not solving the problem close enough to the source. But this is ...


Kris Jenkins (27:36):

Push the solution upstream, right?


Doron Porat (27:38):

Yeah, exactly. Otherwise you have many solutions that people have to apply, they all look different, and you don't really solve the problem at the root cause. This is complicated. This is a big thing to say, because it also brings much more responsibility to the application side, where they don't necessarily care exactly about some data duplication that doesn't affect the application side. This can lead to another culture, where you create a different, clean set of data that you feed analytics with, which could be different than the data that is used for the application side.


Doron Porat (28:15):

But I think at the bottom of all of this, it's a matter of understanding how your data is being used and taking ownership over it and realizing this is an asset that you control and you produce.


Liran Yogev (28:28):

Yeah. It's really a part of process and culture more than its technology. The organization needs to want this and understand why it cannot work any other way.


Doron Porat (28:39):

Invest the time in it.


Liran Yogev (28:40):

Yeah. Both in tooling and both in all the different people that are actually producing data. They have to do a lot more work right now.


Kris Jenkins (28:47):

Because there's always the risk in a less healthy organization, that you go to the people writing the data and they say, "It's fine. I've been writing it that way for three years. It's absolutely fine. Why should I change?" What's their problem?


Doron Porat (29:00):

I won't call it risk, I'll call it reality.


Liran Yogev (29:04):

No, but I think, unfortunately, and I think we see it in both organizations, it's a top-down thing that has to happen. The organization needs to realize that it cannot work that way. For example, in our organization, we're going to start this process called certification of a dataset. It has to be certified. Again, we can use the data catalog and the different tooling to help the producers make that a quick and really easy process.


Liran Yogev (29:30):

But again, we now have a set of expectations from data consumers that they only consume certified data. That certified data has to uphold a bunch of different things. That's one way to solve this. We know there are many different ways, but again, it's a process-oriented way to solve this issue.


Doron Porat (29:46):

I think that where infrastructure comes in, in what you just described, is how you create these processes and change things in a way that doesn't drastically hurt the velocity of the organization.


Liran Yogev (29:58):

Yep. That's our role, I think in the organization. Make it transparent.


Doron Porat (30:08):

Just making more work for ourselves. Yeah, it's just to make sure that we have something to do in the organization.


Liran Yogev (30:08):

By the way, it's not just velocity. I think one of the things that we actually love to do, that I enjoy myself, I'm really a fan of this, is making this a fun process.


Doron Porat (30:14):

A developer happiness.


Liran Yogev (30:16):

Yes. Oh my god, I love it. No, but it's super important because ...


Doron Porat (30:19):

Did I make it up?


Liran Yogev (30:21):

It's yours. It's yours now. Developer happiness.


Kris Jenkins (30:25):

You tried picking it.


Liran Yogev (30:25):

Yeah, it's true. She's really looking forward to this.


Doron Porat (30:26):

Trying all the time.


Liran Yogev (30:30):

But I think this is a really important thing: when we ask these developers, "Oh, now you have to document a bunch of things that you weren't documenting before," or that there was no expectation of, they'll be like, "Oh, but it's so boring. I'm not even a consumer of this data. Why should I care?" You have to make this into a fun process that's very easily implemented. They have these different visualizations to see if this process actually worked. If they understand what's in it for them, that can really help with the process.


Kris Jenkins (31:01):

This reminds me of something like Swagger. The quality of people's documented REST APIs went up after Swagger became mainstream. It's like, "Okay, so it's no longer a horrible chore and a maintenance burden forever." Is that something you've done with the tool you've been working on?


Liran Yogev (31:20):

I think we have different use cases. At Yotpo, what we did was utilize a tool called dbt to document all the different datasets, which is really nice. It has a lot of different features. There's a UI on top of it with a data catalog. It's really easy to write and test. We really invested in it, and you can see our Current talk if you want; you can see a demo of what it can do.
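
For readers who haven't seen dbt, the documentation and tests live in YAML files next to the models, in a format like this; the model and column names are made up, but the structure is dbt's standard one:

```yaml
# models/schema.yml -- dbt's standard documentation-and-tests format
# (the model and column names are made up for illustration).
version: 2

models:
  - name: daily_orders
    description: One row per store per day, aggregated from raw orders.
    meta:
      owner: commerce-analytics   # arbitrary metadata carried into the docs site
    columns:
      - name: store_id
        description: Internal store identifier.
        tests:
          - not_null
      - name: revenue
        description: Gross revenue for the day, in USD.
```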


Kris Jenkins (31:39):

There'll be a link in the show notes to the talk you gave at Current.


Liran Yogev (31:41):

Yes, will do. That will be the Yotpo side. At ZipRecruiter, for example, we're doing the same but using the protobuf format. That's actually very similar to Swagger. We're going to document all our datasets with protobuf and, again, make it very easy to document things and make it very testable and CI/CD-oriented and everything. But again, protobuf is going to be the format for documenting all that.
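
As a sketch of that approach (the message and its fields are invented for illustration), the point is that the documentation travels inside the schema itself as comments, which is what makes it lintable and testable in CI/CD:

```protobuf
// Illustrative only: an invented message showing schema-as-documentation.
syntax = "proto3";

package datasets.commerce;

// One row per completed order. Owner: commerce-analytics.
message Order {
  string order_id = 1;   // Unique order identifier.
  string store_id = 2;   // Internal store identifier.
  int64 created_at = 3;  // Order creation time, Unix epoch millis, UTC.
  double total = 4;      // Gross order total, in USD.
}
```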


Doron Porat (32:01):

Yeah. It's the concept of having an actual contract that both sides sign off on, understand, and respect.


Kris Jenkins (32:07):

Yeah. It's funny. Sometimes in this industry, we avoid saying, "This is static typing, isn't it?" These are strong types for your data. Yeah, it's the same idea.


Doron Porat (32:20):

Let's not open the subject.


Liran Yogev (32:21):

It's kind of worms.


Kris Jenkins (32:22):

Can of worms. [inaudible 00:32:24] types, let's move on.


Liran Yogev (32:25):

This has been the thing during my career, and Doron's as well. Are we starting? No. I've been moving back and forth on the topic of schema: schemaless, kinds of typing. I think, by the way, the world is going back to strong typing, both for programming and also for data. Because I think it's just really messy if you don't have that. If the organization is really big, then it's even messier, in a way that it can really cause production issues if you don't have that in place.


Liran Yogev (33:02):

Schemas are really important, but I'm going to add something to that. What's also important for me is the question of where the control over the schema or the data types sits.


Doron Porat (33:14):

I have an example. No, finish your sentence before I'll give an example.


Liran Yogev (33:19):

It's something I haven't finished talking about.


Doron Porat (33:20):

No, I know.


Liran Yogev (33:20):

No. In the infrastructure world, it depends on the organization. But at Yotpo, we used to have all of the data pipelines go through us. It's not us exactly, it's a virtual us, but we were the ones in charge of what's behind the scenes, running all the different ETLs and data pipelines. Whenever something was bad, it came back to us. Say you have a streaming job and something happens, for example, someone made a non-backward-compatible schema change ...


Doron Porat (33:48):

That was the example.


Liran Yogev (33:49):

Oh, okay. Sorry.


Doron Porat (33:49):

No. Sorry.


Liran Yogev (33:54):

Okay. Then it all comes back to us. "Oh, so fix it for us. It's your problem. You are the infrastructure. I made a mistake. Deal with it." I think this is where we want things to be different. That schema enforcement, or backward compatibility and everything, needs to be at a level where everything below that level can still be schemaless, fluid, and free, without all these operational issues. It's still on the producer side, the shift left that Doron mentioned before, to make sure that they have their contract. But it's not on the infrastructure side.


Doron Porat (34:31):

Yeah, I think ...


Kris Jenkins (34:31):

You don't want to be an infrastructure service. You need to be a platform.


Liran Yogev (34:34):

Yes.


Doron Porat (34:35):

Exactly. Just to follow through on what Liran said: what we did back then, we had a certain issue where we were streaming CDC data for our purposes, to feed live tables in the data lake. But suddenly, "Whoa, this is super interesting data." All these consumers said, "I want this too. I want in." Then they started consuming from the same topic. Then we started getting heat from compatibility issues, because changes were being made to the schema, and we, as the CDC consumer into the data lake, did not really care about it.


Doron Porat (35:07):

So we said, "Okay, no compatibility. We don't care about this." Then we started breaking consumers. The architecture that we came up with was actually separating into different Kafka topics and using MirrorMaker to replicate the topics onto the consumers' brokers, for them to manage their own schema compatibility levels, which solved the problem. You can manage whatever schema compatibility you like. I think this is a really good example of how to distribute the responsibility for whatever changes happen.


Liran Yogev (35:40):

Can I cut in? A split ownership model. I have another example, but I don't think we have time. I don't know.


Kris Jenkins (35:44):

We've got time. I love an example. I love making it concrete.


Liran Yogev (35:50):

Okay. I actually have an interesting example that's from a production side, not from a classic data lake side. We had Elasticsearch for our CDP. We built our CDP, our customer data platform, at Yotpo. Think about a centralized place where all of the events from the organization are flowing in.


Liran Yogev (36:09):

Then you can do aggregation and analysis on top of it. You can cluster different customers and, for example, send them messages; this was mainly used for messaging. You want to find all the customers that have a specific set of properties. The database at this point was Elasticsearch. The data was getting fed from many, many, many different data producers.


Liran Yogev (36:32):

We had producers that were creating data on their side, writing their contracts, for example, and then just feeding it into some kind of very complicated Kafka-based system that was feeding Elasticsearch. Elasticsearch was actually a very problematic piece of infrastructure at that point, because it is not a schemaless tool by design. It looks schemaless, but it's not. It still has a schema behind the scenes.


Liran Yogev (36:56):

You cannot change the type of a field after the fact. You're going to have to create a new index. That's just an example of something that was actually getting in the way. People were like, "Oh, okay, I have control over my schema. It's okay. I want to delete it. I want to create a new one. I want to change something that's not compatible. I even want to push something from third parties, whose schema I don't even have control over." There's a schemaless world happening.


Doron Porat (37:24):

Now, that's an example of a third-party producer.


Liran Yogev (37:24):

Oh, yeah. We wanted our customers to send their own custom events to the CDP, and then Elasticsearch, let's think about it as one piece of the infrastructure puzzle, was not even allowing that. I don't know if they're changing to a different engine behind this.


Doron Porat (37:38):

No, because you left.


Liran Yogev (37:40):

I left. I don't know what's going on right now. But again, this was an example of how infrastructure can really break that model. You want the schema, but you want it at the consumption level. Elasticsearch put it at the producing level, really affecting the entire way the system can work and limiting it. That's an example. Yeah.


Kris Jenkins (38:00):

Oh God. How do you untie knots like that, though?


Doron Porat (38:04):

Leaving.


Liran Yogev (38:05):

Yeah. This was too much for me. Is this too much?


Kris Jenkins (38:07):

I'm getting a sense that this is a slightly sore point that he's abandoned you.


Doron Porat (38:11):

No, I just miss him. I just miss him.


Liran Yogev (38:18):

Yeah. I think, for example, in this case, we were looking at technologies that would allow us to feed schemaless information, or to change the entire data model. It's always like that. You always have to think, "Maybe I've done this entire data design wrong. I have to change it." In this example, maybe we should have split into different indexes in Elasticsearch based on the event type and just done dynamic creation of indexes based on the schemas. Or maybe we should have used a very generic schema where everything is a string.


Liran Yogev (38:45):

Then at the consumption layer, you do something called schema-on-read. You can use that feature, or use a technology like Rockset, which allows you to feed schemaless information and then do schema-on-read, whatever you want, in a very streaming way. It's a very cool technology. There were a lot of different solutions back then. But again, it just shows how important it is to have infrastructure that's not really opinionated about how to actually consume the data.


Doron Porat (39:12):

Yeah. I think a lot of the time, it's a journey. A lot of the things we couldn't have known at the starting point: how things were going to grow, and what the needs and requirements were going to be. This is why we have to keep agility and flexibility in mind, and actually plan work to fit the solution wherever it goes.


Liran Yogev (39:34):

Yeah. If you make a decision like this, it can be a really, really expensive decision. If the system is like the one we're building right now, and what we've worked on together at Yotpo, where you have this decoupled system, then the decision is not as expensive as it could be. You have that decoupling in place. You can always switch to a different thing. There's always going to be a price for every change, right? It's never free. But how costly it will be depends on whether you created this really good infrastructure ...


Kris Jenkins (40:07):

It sounds like you've been through a bit of an evolution in the wars, and you've got some ideas. But I know that you're trying to turn this into a system and open-source it. Tell me a little bit about your open-source plans, so other people can benefit from this.


Liran Yogev (40:24):

Interesting, interesting. I no longer work for Yotpo so, do we have open-source plans?


Kris Jenkins (40:27):

That's the rumor I hear.


Doron Porat (40:30):

We have plans. That's a rumor we're trying to spread. At the moment, we're really focused on winning with this new platform and making it work, working on feature parity between Metorikku and what we have today, in order to substantially replace it. That's what we're super focused on now. But from the feedback we're getting through the demo, and we've talked to people, shown it at a few meetups, and shared it with people, people are excited. What we do in Yoda really gives a more holistic view on top of what dbt is trying to do, which we really liked and really loved.


Liran Yogev (41:19):

I want to reiterate: Yoda is a project that we started off with dbt. dbt is called Data Build Tool. It's an open-source data modeling infrastructure by a company called dbt Labs. They also have a managed solution. It's really the de facto solution for data modeling today. It supports many, many different engines. It really fit our use case.


Liran Yogev (41:41):

With Yoda, what we did was, okay, so dbt is great and covers a very basic part of what we need, but it doesn't solve everything. It's not really about the execution layer. It doesn't really do well for orchestration. You can see our talk about this. With Yoda, we're trying to solve this from one side, the developer experience side. Again, as Doron said, we want to win, we want people to move to Yoda. We created this really nice way of interacting with it, which is better than what dbt proposes as its main interface.


Liran Yogev (42:20):

Then the second part is wrapping it up with a CI/CD and orchestration layer that helps do very complicated things automatically behind the scenes. This is what Yoda does.


Kris Jenkins (42:33):

Does it help at all with the whole governance and lineage angle?


Liran Yogev (42:36):

Oh, yeah.


Doron Porat (42:37):

Yeah. Lineage is basically something you get out of the box from dbt, because everything works with inherent references between the different data models. That's really nice. By the way, it's a different kind of lineage, because it's not created via runtime. It's created from the actual code, from the SQL that you're writing.
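
That reference mechanism is dbt's `ref()` function: a model names its upstream models in its own SQL, so the dependency graph can be compiled without running anything. A minimal example, with invented model names:

```sql
-- models/daily_orders.sql (model names invented for illustration).
-- Referencing stg_orders with ref() declares the dependency;
-- dbt derives the lineage graph from these references at compile time.
SELECT
    store_id,
    DATE(created_at) AS order_date,
    COUNT(*)         AS orders
FROM {{ ref('stg_orders') }}
GROUP BY store_id, DATE(created_at)
```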


Doron Porat (43:00):

But that's also very good, solid lineage that you have. That part is done. For the part of the data documentation that we really, really care about, we added our own automations in order to make this easier for the developers. Because it's really tedious to type in all these YAML files and all these descriptions. We wanted to make this easier for them and, as Liran said before, fun, to encourage them to do so.


Doron Porat (43:28):

Because we talked about it a lot. But I think that during the process of developing a new data pipeline, you're all into it. You know what it means, where it comes from, who to talk to. Then as time goes by, and especially when you leave, there's no one else to tell the story. There's this pipeline running there, and no one really knows how to attend to it, where it comes from, or what it's used for. So the point when a pipeline is being developed and created is a very critical one for capturing all this metadata.


Liran Yogev (44:00):

It's just not letting that all go to waste.


Doron Porat (44:03):

You're just collecting it in a smart way.


Liran Yogev (44:06):

They already have that information in their head. They just need to put it somewhere.


Doron Porat (44:11):

By the way, dbt gave us, out of the box, this beautiful documentation site where you can see all the lineage and documentation, which is awesome. This is definitely where we're starting. But it goes without saying that we need something bigger, like a data catalog solution; for example, ZipRecruiter is using DataHub. We need something on top of that to serve the whole system and bring us end-to-end, to map out the whole data platform from sources to the different applications consuming it. That's the full vision of where we're going.


Liran Yogev (44:46):

If we were talking about open-sourcing it, we would open-source probably ...


Doron Porat (44:48):

We would open-source.


Liran Yogev (44:51):

They would open-source.


Doron Porat (44:51):

You can be a contributor.


Liran Yogev (44:51):

Yeah. No. Okay. Let's not talk about that.


Kris Jenkins (44:57):

You signed your rights away when you left. I'm sorry.


Liran Yogev (44:59):

No, I'm building something much better here. We're going to call it Zoda. No, not really.


Doron Porat (45:04):

Yoda's Yotpo ....


Kris Jenkins (45:06):

Did you need a divorce lawyer or something?


Liran Yogev (45:07):

Yeah, we did.


Doron Porat (45:09):

Yes.


Liran Yogev (45:13):

Anyway, there's a piece, the interactive CLI tool that Doron just described, that's really useful; it auto-creates a lot of the different documentation pieces. Again, we're using a lot of the data that's already out there. Beyond the knowledge you have in your head, it's knowledge that's already out there in the catalog or in the metastore. As we collect that, we can generate it for you automatically. There are a lot of auto-generating pieces.


Liran Yogev (45:39):

Then I think the second piece they'll probably open-source is the orchestration level, which is the way to automatically map how to run and when to run the different datasets. All we ask from our developers there, because it's so confusing for developers in general, is: tell us how fresh you need this dataset to be. Is it going to be a streaming dataset? Is it going to be something refreshed once a day, once an hour? What are your requirements?


Liran Yogev (46:13):

Then behind the scenes, since we have the lineage and we have this knowledge, we can auto-generate the entire orchestration. We can create Airflow DAGs. We can create streaming pipelines. We can do whatever we want. All you have to do is just tell us what you need. We'll take care of that. Again, you see that there's a really nice decoupling of the execution and the actual business logic.
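
A sketch of what that freshness declaration might look like (the keys are hypothetical, not Yoda's actual configuration), with the orchestration layer left to decide whether it becomes a streaming job or a scheduled Airflow DAG:

```yaml
# Hypothetical freshness declaration; not Yoda's actual configuration keys.
model: daily_orders
freshness:
  max_staleness: 1h    # "how fresh do you need this dataset to be?"
# From this plus the compiled lineage, an orchestration layer could decide:
#   max_staleness >= 24h -> nightly Airflow DAG
#   max_staleness >= 1h  -> hourly Airflow DAG
#   max_staleness <  5m  -> streaming pipeline
```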


Doron Porat (46:33):

Yeah. It's the actual abstraction. If we want to create this workflow management, this workflow for a specific pipeline or pipelines, then the developer doesn't necessarily need to know what is happening behind the scenes. It's another thing we can spare them; right now they need to know how to build an Airflow DAG, but it's completely unnecessary.


Liran Yogev (46:55):

It's useless information. Yeah. Why would they need to know that? No, it's boring in a way that it doesn't really help them with ...


Doron Porat (47:03):

Yes, programmatically, very not exciting. It's another trademark.


Kris Jenkins (47:10):

Liran, would you like to commit Doron to a release date for that?


Liran Yogev (47:15):

Yes. They're planning to release it at the beginning of January 2023, for the new year.


Kris Jenkins (47:22):

Excellent. Amazing Christmas.


Liran Yogev (47:24):

I hope they'll invite me to the party. But I don't think they will.


Doron Porat (47:28):

2023, you said.


Liran Yogev (47:31):

Yeah, that's in a month and a half.


Doron Porat (47:39):

Okay.


Kris Jenkins (47:39):

Yeah, take your time.


Doron Porat (47:39):

Don't hold your breath. But it's coming.


Kris Jenkins (47:39):

Depending on when we broadcast this podcast. You may have already missed that release date.


Liran Yogev (47:45):

No, it's a really cool project. I'm super proud of the work that we did together. They're doing it right now. Really, I'm rooting for the win because it's a win for all the people that actually do platform engineering. Because this is the ultimate data platform engineering effort. I would really love to see it win.


Doron Porat (48:04):

Yeah. Another thing that we invested a lot in: dbt is really built for data warehouses. It's really customized for them. We wanted to open it up to something much more flexible and different, and to fit it to our data lake and open data platform needs. I think this can address a lot of people in the industry. The use case is very common.


Kris Jenkins (48:35):

Yeah. If I could put a cap on this, I'm going to tell you what I think I'll take away from this, and you can tell me what I've missed for the conclusion. The overarching theme here is: find a way to get that feedback loop from your consumers to your producers. Give the producers the ability to see the larger picture without distracting them with lower-level details, like how systems will be executed. You've got to get them to focus on the data pipeline from a 10,000-foot view. The only way you're going to get that shift is to have management buy-in from the top.


Kris Jenkins (49:12):

Is that a fair summary of the picture, a three-minute sketch?


Liran Yogev (49:21):

Yeah, it is. I think if you see our talk, it's a story about creating data platforms, the different decisions you make along the way, and how they can really affect the culture of the organization. The two things you just said are completely right. We didn't have our management's support, or the entire organization's support, in the beginning for all of the different things we wanted. Therefore, we had to work from the bottom up.


Liran Yogev (49:48):

We had to work from our engineers. We created Metorikku, which was very successful. But again, it wasn't talking to analytics or to different data consumers. We were really focused on the engineering side. That part is really right.


Liran Yogev (50:01):

I don't remember the other.


Kris Jenkins (50:02):

This feedback loop from how it's being used to how it's being produced.


Liran Yogev (50:12):

I think that's the part that we're both relearning right now as we go into this, in both of our companies: the consumption of data is super, super important. Just putting data out there is not enough. You have to have more information about it. That feedback loop is really important. You have to talk to consumers to understand what they need and to really, really understand what the expectations from the data are.


Liran Yogev (50:35):

Even if, let's say, we did create some really good dataset, but it takes about 20 minutes to query, is that a good dataset? I don't know. I don't think so. It's not just about the metadata. It's also about how people are actually going to use it, what kind of questions they're going to ask.


Liran Yogev (50:51):

If you are not thinking about it, or you're thinking only about yourselves and your team, then you're missing the point. Then they're going to have to create these duplicated data pipelines to summarize your data, or create some shadow data pipelines on top of your data to make it smaller, whatever.


Kris Jenkins (51:10):

I think ... Oh, sorry, go on.


Doron Porat (51:12):

I just want to say that I think it's also a story about maturity: how we matured as a data group, and how Yotpo and ZipRecruiter mature in terms of how we treat our data. We also talked about this in our talk. We talk a lot about governance and governance tooling, that's a big topic in data infrastructure, but we don't talk enough about our role as data governors. I had a really fancy English word for this piece of infrastructure, which I forgot, so I'm going to say it with a silly English word. But I think that this is ...


Kris Jenkins (51:48):

We have so many of those.


Doron Porat (51:50):

No, I had a really nice one in my head. But I think that this piece of infrastructure really shows what we believe in, what's important for us, our job as educators for our organization, and how we can push this agenda through it.


Liran Yogev (52:12):

That was your big word.


Doron Porat (52:18):

No, it was a much fancier word. I'll write it down for you later. Agenda's not a big word, I know. Shut up.


Kris Jenkins (52:21):

Email it through. We'll edit the show notes. Just randomly.


Liran Yogev (52:28):

No, just to put a text to ...


Doron Porat (52:29):

Just shut up.


Kris Jenkins (52:31):

Okay. I think it's a shame you two are no longer working directly together. You should find a way to fix that.


Liran Yogev (52:38):

Yeah, we know.


Kris Jenkins (52:39):

But in the meantime, thank you so much for joining us. Liran and Doron.


Doron Porat (52:42):

Thank you.


Kris Jenkins (52:42):

It's been a pleasure.


Liran Yogev (52:42):

Thank you.


Doron Porat (52:42):

Bye-bye.


Kris Jenkins (52:45):

See you guys.


Kris Jenkins (52:45):

Thank you, Doron and Liran. This is a bit of a tangent, but I'm going to go there. I have always wished that the Agile Manifesto explicitly mentioned feedback loops. Feedback loops are in there; if you read between the lines, they're there. But it would be nice if it was explicit. Because so often we return to this idea. A large part of the job of being a programmer is to get faster and better feedback loops, to iterate better, to give people more of what they need, quicker.


Kris Jenkins (53:16):

Sometimes we do that by leaving the desk and actually talking to different departments and talking to users and understanding them better. But sometimes it's about building tools that we can use to skip over that conversation and get everybody the answers they need fast. You need both. My point is, you might think that a platform team could just hide in the server room and keep things running. But to be good at it, they actually need to be involved in feedback loops. They need to be enabling feedback loops for others. That makes a really great platform.


Kris Jenkins (53:55):

Speaking of feedback, as we were, we love feedback here at Streaming Audio. Please do get in touch. There are "like" buttons to click, comment boxes, review boxes, and share buttons, if you like that kind of feedback. Or if you want to get in touch with me directly, you can find me on Twitter, if it's still running. At the time I'm recording this, it seems a bit uncertain how much longer Twitter will last. But assuming it's there, I'll be there too. Come and find me.


Kris Jenkins (54:25):

If you are ever in London, I've recently started running some monthly hack nights there, so look us up and you can join us for a bit of programming. With that said, before we go, Streaming Audio is brought to you by Confluent Developer, which is our free site that teaches you everything we know about Apache Kafka and real-time event systems in general.


Kris Jenkins (54:46):

Check it out at developer.confluent.io if you want to get started with Kafka or if you want to level up your Kafka skills. To do that, you're probably going to need a Kafka cluster. The easiest way to spin one up is to go to Confluent Cloud and use our cloud Kafka service. That's our platform. We're quite proud of it.


Kris Jenkins (55:08):

You can sign up in minutes. You can have an Apache Kafka cluster running in no time. If you add the code PODCAST100 to your account, you'll get $100 of extra free credit to run with. With that, it remains for me to thank Doron Porat and Liran Yogev for joining us and you for listening. I've been your host, Kris Jenkins. I will catch you next time.