Data Engineering Podcast

This show goes behind the scenes for the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.

 

#449

The Art of Database Selection and Evolution

Summary
In this episode of the Data Engineering Podcast Sam Kleinman talks about the pivotal role of databases in software engineering. Sam shares his journey into the world of data and discusses the complexities of database selection, highlighting the trade-offs between different database architectures and how these choices affect system design, query performance, and the need for ETL processes. He emphasizes the importance of understanding specific requirements to choose the right database engine and warns against over-engineering solutions that can lead to increased complexity. Sam also touches on the tendency of engineers to move logic to the application layer due to skepticism about database longevity and advises teams to leverage database capabilities instead. Finally, he identifies a significant gap in data management tooling: the lack of easy-to-use testing tools for database interactions, highlighting the need for better testing paradigms to ensure reliability and reduce bugs in data-driven applications.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- It’s 2024, why are we still doing data migrations by hand? Teams spend months—sometimes years—manually converting queries and validating data, burning resources and crushing morale. Datafold's AI-powered Migration Agent brings migrations into the modern era. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit [dataengineeringpodcast.com/datafold](https://www.dataengineeringpodcast.com/datafold) today to learn how Datafold can automate your migration and ensure source to target parity.
- Your host is Tobias Macey and today I'm interviewing Sam Kleinman about database tradeoffs across operating environments and axes of scale

Interview
- Introduction
- How did you get involved in the area of data management?
- The database engine you use has a substantial impact on how you architect your overall system. When starting a greenfield project, what do you see as the most important factor to consider when selecting a database?
- Points of friction introduced by database capabilities
- Embedded databases (e.g. SQLite, DuckDB, LanceDB), when to use and when do they become a bottleneck
- Single-node database engines (e.g. Postgres, MySQL), when are they legitimately a problem
- Distributed databases (e.g. CockroachDB, PlanetScale, MongoDB)
- Polyglot storage vs. general-purpose/multimodal databases
- Federated queries, benefits and limitations
  - Ease of integration vs. variability of performance and access control

Contact Info
- [LinkedIn](https://www.linkedin.com/in/samkleinman/)
- [GitHub](https://github.com/tychoish)

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__](https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast](https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems.
- Visit the [site](https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- [MongoDB](https://www.mongodb.com/)
- [Neon](https://neon.tech/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/neon-serverless-postgres-episode-433)
- [GlareDB](https://glaredb.com/)
- [NoSQL](https://en.wikipedia.org/wiki/NoSQL)
- [S3 Conditional Write](https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/)
- [Event driven architecture](https://en.wikipedia.org/wiki/Event-driven_architecture)
- [CockroachDB](https://www.cockroachlabs.com/)
- [Couchbase](https://www.couchbase.com/)
- [Cassandra](https://cassandra.apache.org/_/index.html)

The intro and outro music is from [The Hug](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA](http://creativecommons.org/licenses/by-sa/3.0/)
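The testing gap Sam calls out is easiest to see with a concrete sketch. The following is not from the episode, just a minimal illustration of one common workaround: exercising data-access logic against an in-memory SQLite database so it can run in a plain unit test (the table and function names are made up).

```python
# Minimal sketch (not from the episode): test query logic against an in-memory
# SQLite database instead of mocking the database away.
import sqlite3

def top_customers(conn: sqlite3.Connection, limit: int = 3) -> list[tuple[str, float]]:
    # Keep the aggregation in the database rather than the application layer.
    return conn.execute(
        "SELECT name, SUM(amount) AS total FROM orders "
        "GROUP BY name ORDER BY total DESC LIMIT ?",
        (limit,),
    ).fetchall()

def test_top_customers():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (name TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [("ada", 10.0), ("ada", 5.0), ("grace", 12.0)],
    )
    assert top_customers(conn, limit=1) == [("ada", 15.0)]
```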

01 Dec 2024

59:56


#448

Bridging Code and UI in Data Orchestration with Kestra

Summary
In this episode of the Data Engineering Podcast, Anna Geller talks about the integration of code and UI-driven interfaces for data orchestration. Anna defines data orchestration as automating the coordination of workflow nodes that interact with data across various business functions, discussing how it goes beyond ETL and analytics to enable real-time data processing across different internal systems. She explores the challenges of using existing scheduling tools for data-specific workflows, highlighting limitations and anti-patterns, and discusses Kestra's solution, a low-code orchestration platform that combines code-driven flexibility with UI-driven simplicity. Anna delves into Kestra's architectural design, API-first approach, and pluggable infrastructure, and shares insights on balancing UI and code-driven workflows, the challenges of open-core business models, and innovative user applications of Kestra's platform.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit [dataengineeringpodcast.com/datafold](https://www.dataengineeringpodcast.com/datafold) today for the details.
- As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us you should listen to Data Citizens® Dialogues, the forward-thinking podcast from the folks at Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. They address questions around AI governance, data sharing, and working at global scale. In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. While data is shaping our world, Data Citizens Dialogues is shaping the conversation. Subscribe to Data Citizens Dialogues on Apple, Spotify, Youtube, or wherever you get your podcasts.
- Your host is Tobias Macey and today I'm interviewing Anna Geller about incorporating both code and UI driven interfaces for data orchestration

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by sharing a definition of what constitutes "data orchestration"?
- There are many orchestration and scheduling systems that exist in other contexts (e.g. CI/CD systems, Kubernetes, etc.). Those are often adapted to data workflows because they already exist in the organizational context. What are the anti-patterns and limitations that approach introduces in data workflows?
  - What are the problems that exist in the opposite direction of using data orchestrators for CI/CD, etc.?
- Data orchestrators have been around for decades, with many different generations and opinions about how and by whom they are used. What do you see as the main motivation for UI vs. code-driven workflows?
- What are the benefits of combining code-driven and UI-driven capabilities in a single orchestrator?
  - What constraints does it necessitate to allow for interoperability between those modalities?
- Data orchestrators need to integrate with many external systems. How does Kestra approach building integrations and ensure governance for all their underlying configurations?
- Managing workflows at scale across teams can be challenging in terms of providing structure and visibility of dependencies across workflows and teams. What features does Kestra offer so that all pipelines and teams stay organised?
- What are the most interesting, innovative, or unexpected ways that you have seen Kestra used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Kestra?
- When is Kestra the wrong choice?
- What do you have planned for the future of Kestra?

Contact Info
- [LinkedIn](https://www.linkedin.com/in/anna-geller-12a86811a/)
- [Blog](https://annageller.medium.com/)

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__](https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast](https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems.
- Visit the [site](https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- [Kestra](https://kestra.io/)
- [CI/CD](https://en.wikipedia.org/wiki/CI/CD)
- [State Machine](https://en.wikipedia.org/wiki/Finite-state_machine)
- [AWS Lambda](https://aws.amazon.com/lambda/)
- [GitHub Actions](https://github.com/features/actions)
- [ECS Fargate](https://aws.amazon.com/fargate/)
- [Airflow](https://airflow.apache.org/)
- [Kafka](https://kafka.apache.org/)
- [Elasticsearch](https://www.elastic.co/)
- [Airflow XCom](https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/xcoms.html)

The intro and outro music is from [The Hug](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA](http://creativecommons.org/licenses/by-sa/3.0/)

In this episode of the Data Engineering Podcast, host Tobias Macey interviews Anna Geller, a data engineer turned product manager, about the integration of code and UI-driven interfaces for data orchestration. Anna shares her journey from working with data during an internship at KPMG to her current role as a product lead at Kestra. She provides her insights into the concept of data orchestration, emphasizing its broader scope beyond just ETL and analytics, and discusses the challenges and anti-patterns that arise when using existing scheduling systems for data-specific workflows.

Anna explains the overlap between CI/CD, scheduling, and orchestration tools, and the limitations that occur when these tools are used for data workflows. She highlights the importance of visibility and governance at scale and the need for a dedicated orchestrator like Kestra. The conversation also delves into the challenges of using data orchestrators for non-data workflows and the benefits of combining code and UI-driven approaches.

Anna discusses Kestra's architecture, which supports both JDBC and Kafka backends, and its focus on API-first interactions. She explains how Kestra handles task granularity, inputs, and outputs, and the flexibility provided by its plugin system. The episode also explores Kestra's approach to data as assets, the target audience for Kestra, and how it bridges different workflows across organizational boundaries.

The discussion touches on Kestra's open-core model, the challenges of balancing open-source and enterprise features, and the innovative ways Kestra is being applied. Anna shares insights into Kestra's local development experience, the lessons learned in building the product, and the upcoming features and projects that the Kestra team is excited to explore.
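To make the definition of data orchestration above concrete, here is a generic sketch (deliberately not Kestra's API) of the core job of any orchestrator: running workflow nodes in dependency order. It uses Python's standard-library graphlib, and the task names are illustrative.

```python
# Generic sketch of dependency-ordered execution, the heart of any orchestrator.
from graphlib import TopologicalSorter

def run_flow(tasks: dict, deps: dict) -> None:
    # deps maps each task to the set of upstream tasks that must finish first.
    for name in TopologicalSorter(deps).static_order():
        print(f"running {name}")
        tasks[name]()

run_flow(
    tasks={"extract": lambda: None, "transform": lambda: None, "load": lambda: None},
    deps={"extract": set(), "transform": {"extract"}, "load": {"transform"}},
)
```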

26 Nov 2024

44:30


#447

Streaming Data Into The Lakehouse With Iceberg And Trino At Going

In this episode, I had the pleasure of speaking with Ken Pickering, VP of Engineering at Going, about the intricacies of streaming data into a Trino and Iceberg lakehouse. Ken shared his journey from product engineering to becoming deeply involved in data-centric roles, highlighting his experiences in ecommerce and InsurTech. At Going, Ken leads the data platform team, focusing on finding travel deals for consumers, a task that involves handling massive volumes of flight data and event stream information.

Ken explained the dual approach of passive and active search strategies used by Going to manage the vast data landscape. Passive search involves aggregating data from global distribution systems, while active search is more transactional, querying specific flight prices. This approach helps Going sift through approximately 50 petabytes of data annually to identify the best travel deals.

We delved into the technical architecture supporting these operations, including the use of Confluent for data streaming, Starburst Galaxy for transformation, and Databricks for modeling. Ken emphasized the importance of an open lakehouse architecture, which allows for flexibility and scalability as the business grows.

Ken also discussed the composition of Going's engineering and data teams, highlighting the collaborative nature of their work and the reliance on vendor tooling to streamline operations. He shared insights into the challenges and strategies of managing data life cycles, ensuring data quality, and maintaining uptime for consumer-facing applications.

Throughout our conversation, Ken provided a glimpse into the future of Going's data architecture, including potential expansions into other travel modes and the integration of large language models for enhanced customer interaction. This episode offers a comprehensive look at the complexities and innovations in building a data-driven travel advisory service.
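As a rough illustration of the query side of such a lakehouse, here is a hedged sketch using the `trino` Python client against an Iceberg catalog. The host, schema, and `flight_offers` table are hypothetical, not Going's actual setup.

```python
# Hypothetical example: querying an Iceberg table through Trino with the `trino` client.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",  # placeholder coordinator host
    port=8080,
    user="analyst",
    catalog="iceberg",
    schema="travel",
)
cur = conn.cursor()
cur.execute(
    "SELECT origin, destination, MIN(price) AS best_price "
    "FROM flight_offers GROUP BY origin, destination ORDER BY best_price LIMIT 10"
)
for row in cur.fetchall():
    print(row)
```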

18 Nov 2024

39:49


#446

An Opinionated Look At End-to-end Code Only Analytical Workflows With Bruin

Summary
The challenges of integrating all of the tools in the modern data stack have led to a new generation of tools that focus on a fully integrated workflow. At the same time, there have been many approaches to how much of the workflow is driven by code vs. not. Burak Karakan is of the opinion that a fully integrated workflow that is driven entirely by code offers a beneficial and productive means of generating useful analytical outcomes. In this episode he shares how Bruin builds on those opinions and how you can use it to build your own analytics without having to cobble together a suite of tools with conflicting abstractions.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold](https://www.dataengineeringpodcast.com/datafold) today!
- Your host is Tobias Macey and today I'm interviewing Burak Karakan about the benefits of building code-only data systems

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what Bruin is and the story behind it?
  - Who is your target audience?
- There are numerous tools that address the ETL workflow for analytical data. What are the pain points that you are focused on for your target users?
- How does a code-only approach to data pipelines help in addressing the pain points of analytical workflows?
  - How might it act as a limiting factor for organizational involvement?
- Can you describe how Bruin is designed?
  - How have the design and scope of Bruin evolved since you first started working on it?
- You call out the ability to mix SQL and Python for transformation pipelines. What are the components that allow for that functionality?
  - What are some of the ways that the combination of Python and SQL improves ergonomics of transformation workflows?
- What are the key features of Bruin that help to streamline the efforts of organizations building analytical systems?
- Can you describe the workflow of someone going from source data to warehouse and dashboard using Bruin and Ingestr?
- What are the opportunities for contributions to Bruin and Ingestr to expand their capabilities?
- What are the most interesting, innovative, or unexpected ways that you have seen Bruin and Ingestr used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Bruin?
- When is Bruin the wrong choice?
- What do you have planned for the future of Bruin?

Contact Info
- [LinkedIn](https://www.linkedin.com/in/burakkarakan/?originalSubdomain=de)

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__](https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast](https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems.
- Visit the [site](https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- [Bruin](https://getbruin.com/)
- [Fivetran](https://www.fivetran.com/)
- [Stitch](https://www.stitchdata.com/)
- [Ingestr](https://github.com/bruin-data/ingestr)
- [Bruin CLI](https://github.com/bruin-data/bruin)
- [Meltano](https://meltano.com/)
- [SQLGlot](https://github.com/tobymao/sqlglot)
- [dbt](https://www.getdbt.com/)
- [SQLMesh](https://sqlmesh.readthedocs.io/en/stable/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/sqlmesh-open-source-dataops-episode-380)
- [SDF](https://www.sdf.com/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/sdf-fast-and-expressive-sql-transformation-episode-440)
- [Airflow](https://airflow.apache.org/)
- [Dagster](https://dagster.io/)
- [Snowpark](https://www.snowflake.com/en/data-cloud/snowpark/)
- [Atlan](https://atlan.com/)
- [Evidence](https://evidence.dev/)

The intro and outro music is from [The Hug](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA](http://creativecommons.org/licenses/by-sa/3.0/)
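The question above about mixing SQL and Python in one pipeline is easier to picture with a small example. This is a generic sketch using DuckDB, not Bruin's own asset syntax: Python stages the data, SQL expresses the transformation.

```python
# Generic Python + SQL transformation step (illustrative data, not Bruin syntax).
import duckdb

con = duckdb.connect(":memory:")
con.execute("CREATE TABLE sales (day VARCHAR, amount DOUBLE)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("2024-11-01", 120.0), ("2024-11-01", 80.0), ("2024-11-02", 95.0)],
)
daily = con.execute(
    "SELECT day, SUM(amount) AS revenue FROM sales GROUP BY day ORDER BY day"
).fetchall()
print(daily)  # [('2024-11-01', 200.0), ('2024-11-02', 95.0)]
```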

11 Nov 2024

56:11


#445

Feldera: Bridging Batch and Streaming with Incremental Computation

Summary
In this episode of the Data Engineering Podcast, the creators of Feldera talk about their incremental compute engine designed for continuous computation of data, machine learning, and AI workloads. The discussion covers the concept of incremental computation, the origins of Feldera, and its unique ability to handle both streaming and batch data seamlessly. The guests explore Feldera's architecture, applications in real-time machine learning and AI, and challenges in educating users about incremental computation. They also discuss the balance between open-source and enterprise offerings, and the broader implications of incremental computation for the future of data management, predicting a shift towards unified systems that handle both batch and streaming data efficiently.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold](https://www.dataengineeringpodcast.com/datafold) today!
- As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us you should listen to Data Citizens® Dialogues, the forward-thinking podcast from the folks at Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. They address questions around AI governance, data sharing, and working at global scale. In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. While data is shaping our world, Data Citizens Dialogues is shaping the conversation. Subscribe to [Data Citizens Dialogues](https://www.collibra.com/podcasts) on Apple, Spotify, Youtube, or wherever you get your podcasts.
- Your host is Tobias Macey and today I'm interviewing Leonid Ryzhyk, Lalith Suresh, and Mihai Budiu about Feldera, an incremental compute engine for continuous computation of data, ML, and AI workloads

Interview
- Introduction
- Can you describe what Feldera is and the story behind it?
- DBSP (the theory behind Feldera) has won multiple awards from the database research community. Can you explain what it is and how it solves the incremental computation problem?
- Depending on which angle you look at it, Feldera has attributes of data warehouses, federated query engines, and stream processors. What are the unique use cases that Feldera is designed to address?
  - In what situations would you replace another technology with Feldera?
  - When is it an additive technology?
- Can you describe the architecture of Feldera?
  - How have the design and scope evolved since you first started working on it?
- What are the state storage interfaces available in Feldera?
  - What are the opportunities for integrating with or building on top of open table formats like Iceberg, Lance, Hudi, etc.?
- Can you describe a typical workflow for an engineer building with Feldera?
- You advertise Feldera's utility in ML and AI use cases in addition to data management. What are the features that make it conducive to those applications?
- What is your philosophy toward the community growth and engagement with the open source aspects of Feldera and how you're balancing that with sustainability of the project and business?
- What are the most interesting, innovative, or unexpected ways that you have seen Feldera used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Feldera?
- When is Feldera the wrong choice?
- What do you have planned for the future of Feldera?

Contact Info
- Leonid
  - [Website](https://ryzhyk.net/)
  - [GitHub](https://github.com/ryzhyk)
  - [LinkedIn](https://www.linkedin.com/in/leonid-ryzhyk-0ba031b9/)
- Lalith
  - [LinkedIn](https://www.linkedin.com/in/lalith-suresh-34bb8911/)
  - [Website](https://lalith.in/research/)
- Mihai
  - [Website](https://mihaibudiu.github.io/work/index.html)
  - [GitHub](https://github.com/mihaibudiu)

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__](https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast](https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems.
- Visit the [site](https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- [Feldera](https://www.feldera.com/)
  - [GitHub](https://github.com/feldera/feldera)
- [DBSP](https://arxiv.org/abs/2203.16684) paper
  - [Rust Crate](https://docs.rs/dbsp/latest/dbsp/)
- [Differential Dataflow](https://timelydataflow.github.io/differential-dataflow/)
- [Trino](https://trino.io/)
- [Flink](https://flink.apache.org/)
- [Spark](https://spark.apache.org/)
- [Materialize](https://materialize.com/)
- [Clickhouse](https://clickhouse.com/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/clickhouse-data-warehouse-episode-88/)
- [DuckDB](https://duckdb.org/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/duckdb-in-process-olap-database-episode-270/)
- [Snowflake](https://www.snowflake.com)
- [Arrow](https://arrow.apache.org/)
- [Substrait](https://substrait.io/)
- [DataFusion](https://datafusion.apache.org/)
- [DSP == Digital Signal Processing](https://en.wikipedia.org/wiki/Digital_signal_processing)
- [CDC == Change Data Capture](https://en.wikipedia.org/wiki/Change_data_capture)
- [PRQL](https://prql-lang.org/)
- [LSM (Log-Structured Merge) Tree](https://en.wikipedia.org/wiki/Log-structured_merge-tree)
- [Iceberg](https://iceberg.apache.org/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52/)
- [Delta Lake](https://delta.io/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/delta-lake-data-lake-episode-85/)
- [Open VSwitch](https://www.openvswitch.org/)
- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)
- [Calcite](https://calcite.apache.org/)

The intro and outro music is from [The Hug](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA](http://creativecommons.org/licenses/by-sa/3.0/)
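For readers new to the idea, here is a toy sketch of incremental computation in plain Python (the concept only, not Feldera's DBSP engine or API): maintain the aggregate by applying change events rather than recomputing over the full dataset.

```python
# Toy incremental aggregation: apply inserts/retractions as signed deltas.
from collections import defaultdict

totals: dict[str, int] = defaultdict(int)

def apply_delta(key: str, change: int) -> None:
    totals[key] += change  # positive for inserts, negative for deletions

for key, change in [("clicks", 5), ("clicks", 3), ("views", 10), ("clicks", -2)]:
    apply_delta(key, change)

print(dict(totals))  # {'clicks': 6, 'views': 10}
```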

04 Nov 2024

47:36


#444

Accelerate Migration Of Your Data Warehouse with Datafold's AI Powered Migration Agent

Summary
Gleb Mezhanskiy, CEO and co-founder of Datafold, joins Tobias Macey to discuss the challenges and innovations in data migrations. Gleb shares his experiences building and scaling data platforms at companies like Autodesk and Lyft, and how these experiences inspired the creation of Datafold to address data quality issues across teams. He outlines the complexities of data migrations, including common pitfalls such as technical debt and the importance of achieving parity between old and new systems. Gleb also discusses Datafold's innovative use of AI and large language models (LLMs) to automate translation and reconciliation processes in data migrations, reducing the time and effort required for migrations.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold](https://www.dataengineeringpodcast.com/datafold) today!
- Your host is Tobias Macey and today I'm welcoming back Gleb Mezhanskiy to talk about Datafold's experience bringing AI to bear on the problem of migrating your data stack

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what the Data Migration Agent is and the story behind it?
  - What is the core problem that you are targeting with the agent?
- What are the biggest time sinks in the process of database and tooling migration that teams run into?
- Can you describe the architecture of your agent?
  - What was your selection and evaluation process for the LLM that you are using?
- What were some of the main unknowns that you had to discover going into the project?
  - What are some of the evolutions in the ecosystem that occurred either during the development process or since your initial launch that have caused you to second-guess elements of the design?
- In terms of SQL translation there are libraries such as SQLGlot and the work being done with SDF that aim to address that through AST parsing and subsequent dialect generation. What are the ways that approach is insufficient in the context of a platform migration?
- How does the approach you are taking with the combination of data-diffing and automated translation help build confidence in the migration target?
- What are the most interesting, innovative, or unexpected ways that you have seen the Data Migration Agent used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on building an AI powered migration assistant?
- When is the data migration agent the wrong choice?
- What do you have planned for the future of applications of AI at Datafold?

Contact Info
- [LinkedIn](https://www.linkedin.com/in/glebmezh/)

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__](https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast](https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems.
- Visit the [site](https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- [Datafold](https://www.datafold.com/)
- [Datafold Migration Agent](https://www.datafold.com/data-migration)
- [Datafold data-diff](https://www.datafold.com/data-diff)
- [Datafold Reconciliation Podcast Episode](https://www.dataengineeringpodcast.com/datafold-database-reconciliation-episode-417)
- [SQLGlot](https://github.com/tobymao/sqlglot)
- [Lark](https://github.com/lark-parser/lark) parser
- [Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet)
- [Looker](https://cloud.google.com/looker/?hl=en)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/looker-with-daniel-mintz-episode-55)

The intro and outro music is from [The Hug](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA](http://creativecommons.org/licenses/by-sa/3.0/)
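The interview contrasts the agent with AST-based translators such as SQLGlot; for reference, this is roughly what SQLGlot's transpile call looks like (the dialects and query are illustrative).

```python
# Translating a query between SQL dialects with SQLGlot's AST-based transpiler.
import sqlglot

query = "SELECT IFF(amount > 100, 'big', 'small') AS bucket FROM orders"
translated = sqlglot.transpile(query, read="snowflake", write="duckdb")[0]
print(translated)  # Snowflake's IFF becomes an equivalent expression in the target dialect
```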

27 Oct 2024

48:50


#443

Bring Vector Search And Storage To The Data Lake With Lance

Summary
The rapid growth of generative AI applications has prompted a surge of investment in vector databases. While there are numerous engines available now, Lance is designed to integrate with data lake and lakehouse architectures. In this episode Weston Pace explains the inner workings of the Lance format for table definitions and file storage, and the optimizations that they have made to allow for fast random access and efficient schema evolution. In addition to integrating well with data lakes, Lance is also a first-class participant in the Arrow ecosystem, making it easy to use with your existing ML and AI toolchains. This is a fascinating conversation about a technology that is focused on expanding the range of options for working with vector data.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold](https://www.dataengineeringpodcast.com/datafold) today!
- Your host is Tobias Macey and today I'm interviewing Weston Pace about the Lance file and table format for column-oriented vector storage

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what Lance is and the story behind it?
  - What are the core problems that Lance is designed to solve?
    - What is explicitly out of scope?
- The README mentions that it is straightforward to convert to Lance from Parquet. What is the motivation for this compatibility/conversion support?
  - What formats does Lance replace or obviate?
- In terms of data modeling Lance obviously adds a vector type, what are the features and constraints that engineers should be aware of when modeling their embeddings or arbitrary vectors?
  - Are there any practical or hard limitations on vector dimensionality?
- When generating Lance files/datasets, what are some considerations to be aware of for balancing file/chunk sizes for I/O efficiency and random access in cloud storage?
- I noticed that the file specification has space for feature flags. How has that aided in enabling experimentation in new capabilities and optimizations?
- What are some of the engineering and design decisions that were most challenging and/or had the biggest impact on the performance and utility of Lance?
- The most obvious interface for reading and writing Lance files is through LanceDB. Can you describe the use cases that it focuses on and its notable features?
  - What are the other main integrations for Lance?
  - What are the opportunities or roadblocks in adding support for Lance and vector storage/indexes in e.g. Iceberg or Delta to enable its use in data lake environments?
- What are the most interesting, innovative, or unexpected ways that you have seen Lance used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on the Lance format?
- When is Lance the wrong choice?
- What do you have planned for the future of Lance?

Contact Info
- [LinkedIn](https://www.linkedin.com/in/weston-pace-cool-dude/)
- [GitHub](https://github.com/westonpace)

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links
- [Lance Format](https://lancedb.github.io/lance/)
- [LanceDB](https://lancedb.github.io/lancedb/)
- [Substrait](https://substrait.io/)
- [PyArrow](https://arrow.apache.org/docs/python/index.html)
- [FAISS](https://github.com/facebookresearch/faiss)
- [Pinecone](https://www.pinecone.io/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/pinecone-vector-database-similarity-search-episode-189/)
- [Parquet](https://parquet.apache.org/)
- [Iceberg](https://iceberg.apache.org/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52/)
- [Delta Lake](https://delta.io/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/delta-lake-data-lake-episode-85/)
- [PyLance](https://github.com/lancedb/lance/tree/main/python)
- [Hilbert Curves](https://en.wikipedia.org/wiki/Hilbert_curve)
- [SIFT Vectors](https://en.wikipedia.org/wiki/Scale-invariant_feature_transform)
- [S3 Express](https://aws.amazon.com/s3/storage-classes/express-one-zone/)
- [Weka](https://www.weka.io/)
- [DataFusion](https://datafusion.apache.org/)
- [Ray Data](https://www.ray.io/)
- [Torch Data Loader](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html#preparing-your-data-for-training-with-dataloaders)
- [HNSW == Hierarchical Navigable Small Worlds](https://lancedb.github.io/lancedb/concepts/index_hnsw/) vector index
- [IVFPQ](https://lancedb.github.io/lancedb/concepts/index_ivfpq/) vector index
- [GeoJSON](https://geojson.org/)
- [Polars](https://docs.pola.rs/)

The intro and outro music is from [The Hug](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA](http://creativecommons.org/licenses/by-sa/3.0/)
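The Parquet-conversion point above is simple in practice; here is a small sketch assuming the pylance package's write_dataset and dataset entry points (the file paths are placeholders).

```python
# Convert an existing Parquet file into a Lance dataset and read it back.
import lance
import pyarrow.parquet as pq

table = pq.read_table("embeddings.parquet")      # placeholder input path
lance.write_dataset(table, "embeddings.lance")   # rewrite as a Lance dataset

ds = lance.dataset("embeddings.lance")
print(ds.schema)        # same Arrow schema, now stored in the Lance format
print(ds.count_rows())
```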

20 Oct 2024

58:01


#442

The Role of Python in Shaping the Future of Data Platforms with DLT

Summary
In this episode of the Data Engineering Podcast, Adrian Brudaru and Marcin Rudolf, co-founders of dltHub, delve into the principles guiding dlt's development, emphasizing its role as a library rather than a platform, and its integration with lakehouse architectures and AI application frameworks. The episode explores the impact of the Python ecosystem's growth on dlt, highlighting integrations with high-performance libraries and the benefits of Arrow and DuckDB. The episode concludes with a discussion on the future of dlt, including plans for a portable data lake and the importance of interoperability in data management tools.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold](https://www.dataengineeringpodcast.com/datafold) today!
- Your host is Tobias Macey and today I'm interviewing Adrian Brudaru and Marcin Rudolf, cofounders at dltHub, about the growth of dlt and the numerous ways that you can use it to address the complexities of data integration

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what dlt is and how it has evolved since we last spoke (September 2023)?
  - What are the core principles that guide your work on dlt and dlthub?
- You have taken a very opinionated stance against managed extract/load services. What are the shortcomings of those platforms, and when would you argue in their favor?
- The landscape of data movement has undergone some interesting changes over the past year. Most notably, the growth of PyAirbyte and the rapid shifts around the needs of generative AI stacks (vector stores, unstructured data processing, etc.). How has that informed your product development and positioning?
  - The Python ecosystem, and in particular data-oriented Python, has also undergone substantial evolution. What are the developments in the libraries and frameworks that you have been able to benefit from?
- What are some of the notable investments that you have made in the developer experience for building dlt pipelines?
  - How have the interfaces for source/destination development improved?
- You recently published a post about the idea of a portable data lake. What are the missing pieces that would make that possible, and what are the developments/technologies that put that idea within reach?
- What is your strategy for building a sustainable product on top of dlt?
  - How does that strategy help to form a "virtuous cycle" of improving the open source foundation?
- What are the most interesting, innovative, or unexpected ways that you have seen dlt used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on dlt?
- When is dlt the wrong choice?
- What do you have planned for the future of dlt/dlthub?

Contact Info
- Adrian
  - [LinkedIn](https://www.linkedin.com/in/data-team/?originalSubdomain=de)
- Marcin
  - [LinkedIn](https://www.linkedin.com/in/marcinrudolf/?originalSubdomain=de)

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__](https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast](https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems.
- Visit the [site](https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- [dlt](https://dlthub.com)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/dlt-data-integration-library-episode-390)
- [PyArrow](https://arrow.apache.org/docs/python/)
- [Polars](https://docs.pola.rs/)
- [Ibis](https://ibis-project.org/)
- [DuckDB](https://duckdb.org/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/duckdb-in-process-olap-database-episode-270/)
- [dlt Data Contracts](https://dlthub.com/docs/general-usage/schema-contracts)
- [RAG == Retrieval Augmented Generation](https://github.blog/ai-and-ml/generative-ai/what-is-retrieval-augmented-generation-and-what-does-it-do-for-generative-ai/)
  - [AI Engineering Podcast Episode](https://www.aiengineeringpodcast.com/retrieval-augmented-generation-implementation-episode-34)
- [PyAirbyte](https://docs.airbyte.com/using-airbyte/pyairbyte/getting-started)
- [OpenAI o1 Model](https://openai.com/o1/)
- [LanceDB](https://lancedb.com/)
- [QDrant Embedded](https://qdrant.tech/)
- [Airflow](https://airflow.apache.org/)
- [GitHub Actions](https://github.com/features/actions)
- [Arrow DataFusion](https://datafusion.apache.org/)
- [Apache Arrow](https://arrow.apache.org/)
- [PyIceberg](https://py.iceberg.apache.org/)
- [Delta-RS](https://github.com/delta-io/delta-rs)
- [SCD2 == Slowly Changing Dimensions](https://dlthub.com/docs/general-usage/incremental-loading#scd2-strategy)
- [SQLAlchemy](https://www.sqlalchemy.org/)
- [SQLGlot](https://github.com/tobymao/sqlglot)
- [FSSpec](https://github.com/fsspec/)
- [Pydantic](https://docs.pydantic.dev/latest/)
- [Spacy](https://spacy.io/)
- [Entity Recognition](https://en.wikipedia.org/wiki/Named-entity_recognition)
- [Parquet File Format](https://parquet.apache.org/)
- [Python Decorator](https://book.pythontips.com/en/latest/decorators.html)
- [REST API Toolkit](https://dlthub.com/blog/rest-api-source-client)
- [OpenAPI Connector Generator](https://dlthub.com/docs/dlt-ecosystem/verified-sources/openapi-generator)
- [ConnectorX](https://github.com/sfu-db/connector-x)
- [Python no-GIL](https://www.blog.pythonlibrary.org/2024/03/14/python-3-13-allows-disabling-of-the-gil-subinterpreters/)
- [Delta Lake](https://delta.io/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/delta-lake-data-lake-episode-85/)
- [SQLMesh](https://sqlmesh.readthedocs.io/en/stable/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/sqlmesh-open-source-dataops-episode-380)
- [Hamilton](https://github.com/DAGWorks-Inc/hamilton)
- [Tabular](https://www.tabular.io/)
- [PostHog](https://posthog.com/)
  - [Podcast.__init__ Episode](https://www.pythonpodcast.com/episodepage/open-source-product-analytics-with-posthog)
- [AsyncIO](https://docs.python.org/3/library/asyncio.html)
- [Cursor.AI](https://www.cursor.com/)
- [Data Mesh](https://www.datamesh-architecture.com/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/episodepage/straining-your-data-lake-through-a-data-mesh)
- [FastAPI](https://fastapi.tiangolo.com/)
- [LangChain](https://www.langchain.com/)
- [GraphRAG](https://neo4j.com/blog/graphrag-manifesto/)
  - [AI Engineering Podcast Episode](https://www.aiengineeringpodcast.com/graphrag-knowledge-graph-semantic-retrieval-episode-37)
- [Property Graph](https://en.wikipedia.org/wiki/Property_graph)
- [Python uv](https://docs.astral.sh/uv/)

The intro and outro music is from [The Hug](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA](http://creativecommons.org/licenses/by-sa/3.0/)
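As a reminder of the library-not-platform point, a dlt pipeline is a few lines of Python; this minimal sketch loads plain dicts into DuckDB (the pipeline, dataset, and table names are illustrative).

```python
# Minimal dlt pipeline: schema is inferred and the destination table is created on the fly.
import dlt

pipeline = dlt.pipeline(
    pipeline_name="demo_pipeline",
    destination="duckdb",
    dataset_name="raw_data",
)
rows = [{"id": 1, "name": "ada"}, {"id": 2, "name": "grace"}]
load_info = pipeline.run(rows, table_name="users")
print(load_info)
```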

13 Oct 2024

54:08


#441

Build Your Data Transformations Faster And Safer With SDF

Summary
In this episode of the Data Engineering Podcast Lukas Schulte, co-founder and CEO of SDF, explores the development and capabilities of this fast and expressive SQL transformation tool. From its origins as a solution for addressing data privacy, governance, and quality concerns in modern data management, to its unique features like static analysis and type correctness, Lukas dives into what sets SDF apart from other tools like dbt and SQLMesh. Tune in for insights on building a business around a developer tool, the importance of community and user experience in the data engineering ecosystem, and plans for future development, including supporting Python models and enhancing execution capabilities.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold](https://www.dataengineeringpodcast.com/datafold) today!
- Your host is Tobias Macey and today I'm interviewing Lukas Schulte about SDF, a fast and expressive SQL transformation tool that understands your schema

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what SDF is and the story behind it?
  - What's the story behind the name?
- What problem are you solving with SDF?
  - dbt has been the dominant player for SQL-based transformations for several years, with other notable competition in the form of SQLMesh. Can you give an overview of the venn diagram for features and functionality across SDF, dbt and SQLMesh?
- Can you describe the design and implementation of SDF?
  - How have the scope and goals of the project changed since you first started working on it?
- What does the development experience look like for a team working with SDF?
  - How does that differ between the open and paid versions of the product?
- What are the features and functionality that SDF offers to address intra- and inter-team collaboration?
- One of the challenges for any second-mover technology with an established competitor is the adoption/migration path for teams who have already invested in the incumbent (dbt in this case). How are you addressing that barrier for SDF?
  - Beyond the core migration path of the direct functionality of the incumbent product is the amount of tooling and communal knowledge that grows up around that product. How are you thinking about that aspect of the current landscape?
- What is your governing principle for what capabilities are in the open core and which go in the paid product?
- What are the most interesting, innovative, or unexpected ways that you have seen SDF used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on SDF?
- When is SDF the wrong choice?
- What do you have planned for the future of SDF?

Contact Info
- [LinkedIn](https://www.linkedin.com/in/lukas-schulte-a6b16254/)

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links
- [SDF](https://www.sdf.com/)
- [Semantic Data Warehouse](https://www.datacamp.com/blog/semantic-layer)
- [asdf-vm](https://asdf-vm.com/)
- [dbt](https://www.getdbt.com/)
- [Software Linting](https://en.wikipedia.org/wiki/Lint_(software))
- [SQLMesh](https://sqlmesh.readthedocs.io/en/stable/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/sqlmesh-open-source-dataops-episode-380)
- [Coalesce](https://coalesce.io/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/coalesce-enterprise-analytics-transformations-episode-278)
- [Apache Iceberg](https://iceberg.apache.org/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52/)
- [DuckDB](https://duckdb.org/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/duckdb-in-process-olap-database-episode-270/)
- [SDF Classifiers](https://docs.sdf.com/guide/basics/classifiers)
- [dbt Semantic Layer](https://docs.getdbt.com/docs/build/semantic-models)
- [dbt expectations](https://hub.getdbt.com/calogica/dbt_expectations/latest/)
- [Apache Datafusion](https://datafusion.apache.org/)
- [Ibis](https://ibis-project.org/)

The intro and outro music is from [The Hug](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA](http://creativecommons.org/licenses/by-sa/3.0/)

06 Oct 2024

42:36


#440

Scaling Airbyte: Challenges and Milestones on the Road to 1.0

Summary
Airbyte is one of the most prominent platforms for data movement. Over the past 4 years they have invested heavily in solutions for scaling the self-hosted and cloud operations, as well as the quality and stability of their connectors. As a result of that hard work, they have declared their commitment to the future of the platform with a 1.0 release. In this episode Michel Tricot shares the highlights of their journey and the exciting new capabilities that are coming next.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Your host is Tobias Macey and today I'm interviewing Michel Tricot about the journey to the 1.0 launch of Airbyte and what that means for the project

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what Airbyte is and the story behind it?
- What are some of the notable milestones that you have traversed on your path to the 1.0 release?
- The ecosystem has gone through some significant shifts since you first launched Airbyte. How have trends such as generative AI, the rise and fall of the "modern data stack", and the shifts in investment impacted your overall product and business strategies?
- What are some of the hard-won lessons that you have learned about the realities of data movement and integration?
  - What are some of the most interesting/challenging/surprising edge cases or performance bottlenecks that you have had to address?
- What are the core architectural decisions that have proven to be effective?
  - How has the architecture had to change as you progressed to the 1.0 release?
- A 1.0 version signals a degree of stability and commitment. Can you describe the decision process that you went through in committing to a 1.0 version?
- What are the most interesting, innovative, or unexpected ways that you have seen Airbyte used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Airbyte?
- When is Airbyte the wrong choice?
- What do you have planned for the future of Airbyte after the 1.0 launch?

Contact Info
- [LinkedIn](https://www.linkedin.com/in/micheltricot/)

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__](https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast](https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems.
- Visit the [site](https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- [Airbyte](https://airbyte.com/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/airbyte-open-source-data-integration-episode-173)
- [Airbyte Cloud](https://airbyte.com/product/airbyte-cloud)
- [Airbyte Connector Builder](https://airbyte.com/product/connector-development-kit)
- [Singer Protocol](https://www.singer.io/)
- [Airbyte Protocol](https://docs.airbyte.com/understanding-airbyte/airbyte-protocol)
- [Airbyte CDK](https://docs.airbyte.com/connector-development/cdk-python/)
- [Modern Data Stack](https://www.moderndatastack.xyz/)
- [ELT](https://en.wikipedia.org/wiki/Extract,_load,_transform)
- [Vector Database](https://en.wikipedia.org/wiki/Vector_database)
- [dbt](https://www.getdbt.com/)
- [Fivetran](https://www.fivetran.com/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/fivetran-data-replication-episode-93)
- [Meltano](https://meltano.com/)
  - [Podcast Episode](https://www.dataengineeringpodcast.com/meltano-data-integration-episode-141)
- [dlt](https://dlthub.com/docs/intro)
- [Reverse ETL](https://medium.com/memory-leak/reverse-etl-a-primer-4e6694dcc7fb)
- [GraphRAG](https://neo4j.com/blog/graphrag-manifesto/)
  - [AI Engineering Podcast Episode](https://www.aiengineeringpodcast.com/graphrag-knowledge-graph-semantic-retrieval-episode-37)

The intro and outro music is from [The Hug](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra](http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA](http://creativecommons.org/licenses/by-sa/3.0/)

23 Sep 2024

57 MINS

57:11



#439

Enhancing Data Accessibility and Governance with Gravitino

SummaryAs data architectures become more elaborate and the number of applications of data increases, it becomes increasingly challenging to locate and access the underlying data. Gravitino was created to provide a single interface to locate and query your data. In this episode Junping Du explains how Gravitino works, the capabilities that it unlocks, and how it fits into your data platform.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Your host is Tobias Macey and today I'm interviewing Junping Du about Gravitino, an open source metadata service for a unified view of all of your schemas Interview ---Introduction ---How did you get involved in the area of data management? ---Can you describe what Gravitino is and the story behind it? ---What problems are you solving with Gravitino? ------What are the methods that teams have relied on in the absence of Gravitino to address those use cases? ---What led to the Hive Metastore being the default for so long? ------What are the opportunities for innovation and new functionality in the metadata service? ---The documentation suggests that Gravitino has overlap with a number of tool categories such as table schema (Hive metastore), metadata repository (Open Metadata), data federation (Trino/Alluxio). What are the capabilities that it can completely replace, and which will require other systems for more comprehensive functionality? ---What are the capabilities that you are explicitly keeping out of scope for Gravitino? ---Can you describe the technical architecture of Gravitino? ------How have the design and scope evolved from when you first started working on it? ---Can you describe how Gravitino integrates into an overall data platform? ------In a typical day, what are the different ways that a data engineer or data analyst might interact with Gravitino? ---One of the features that you highlight is centralized permissions management. Can you describe the access control model that you use for unifying across underlying sources? ---What are the most interesting, innovative, or unexpected ways that you have seen Gravitino used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on Gravitino? ---When is Gravitino the wrong choice? ---What do you have planned for the future of Gravitino? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/junping-du/) --- [GitHub] (https://github.com/JunpingDu) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. 
Links --- [Gravitino] (https://gravitino.apache.org/) --- [Hadoop] (https://hadoop.apache.org) --- [Datastrato] (https://datastrato.ai/) --- [PyTorch] (https://pytorch.org/) --- [Ray] (https://www.ray.io/) --- [Data Fabric] (https://www.gartner.com/en/data-analytics/topics/data-fabric) --- [Hive] (https://hive.apache.org/) --- [Iceberg] (https://iceberg.apache.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52) --- [Hive Metastore] (https://cwiki.apache.org/confluence/display/hive/design#Design-Metastore) --- [Trino] (https://trino.io/) --- [OpenMetadata] (https://open-metadata.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/openmetadata-universal-metadata-layer-episode-237/) --- [Alluxio] (https://www.alluxio.io/) --- [Atlan] (https://atlan.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/atlan-data-team-collaboration-episode-179) --- [Spark] (https://spark.apache.org/) --- [Thrift] (https://thrift.apache.org/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more

01 Sep 2024

38 MINS

38:41



#438

The Evolution of DataOps: Insights from DataKitchen's CEO

SummaryIn this episode of the Data Engineering Podcast, host Tobias Macey welcomes back Chris Bergh, CEO of DataKitchen, to discuss his ongoing mission to simplify the lives of data engineers. Chris explains the challenges faced by data engineers, such as constant system failures, the need for rapid changes, and high customer demands. Chris delves into the concept of DataOps, its evolution, and the misappropriation of related terms like data mesh and data observability. He emphasizes the importance of focusing on processes and systems rather than just tools to improve data engineering workflows. Chris also introduces DataKitchen's open-source tools, DataOps TestGen and DataOps Observability, designed to automate data quality validation and monitor data journeys in production (a simplified example of such an automated check follows the episode links below).Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Chris Bergh about his tireless quest to simplify the lives of data engineers Interview ---Introduction ---How did you get involved in the area of data management? ---Can you describe what DataKitchen is and the story behind it? ---You helped to define and popularize "DataOps", which then went through a journey of misappropriation similar to "DevOps", and has since faded in use. What is your view on the realities of "DataOps" today? ---Out of the popularized wave of "DataOps" tools came subsequent trends in data observability, data reliability engineering, etc. How have those cycles influenced the way that you think about the work that you are doing at DataKitchen? ---The data ecosystem went through a massive growth period over the past ~7 years, and we are now entering a cycle of consolidation. What are the fundamental shifts that we have gone through as an industry in the management and application of data? ---What are the challenges that never went away? ---You recently open sourced the dataops-testgen and dataops-observability tools. What are the outcomes that you are trying to produce with those projects? ---What are the areas of overlap with existing tools and what are the unique capabilities that you are offering? ---Can you talk through the technical implementation of your new observability and quality testing platform? ---What does the onboarding and integration process look like? ---Once a team has one or both tools set up, what are the typical points of interaction that they will have over the course of their workday? ---What are the most interesting, innovative, or unexpected ways that you have seen dataops-observability/testgen used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on promoting DataOps? ---What do you have planned for the future of your work at DataKitchen? 
Contact Info --- [LinkedIn] (https://www.linkedin.com/in/chrisbergh/) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Links --- [DataKitchen] (https://datakitchen.io/) --- [Podcast Episode] (https://www.dataengineeringpodcast.com/episodepage/datakitchen-dataops-with-chris-bergh-episode-26) --- [NASA] (https://www.nasa.gov/ames/core-area-of-expertise-air-traffic-management/) --- [DataOps Manifesto] (https://dataopsmanifesto.org/en/) --- [Data Reliability Engineering] (https://thenewstack.io/its-time-for-data-reliability-engineering/?utm_referrer=https%3A%2F%2Fwww.google.com%2F) --- [Data Observability] (https://www.ibm.com/topics/data-observability) --- [dbt] (https://www.getdbt.com/) --- [DevOps Enterprise Summit] (https://itrevolution.com/product/enterprise-technology-leadership-summit-las-vegas-2024/) --- [Building The Data Warehouse] (https://amzn.to/46BsRSo) by Bill Inmon (affiliate link) --- [dataops-testgen, dataops-observability] (https://github.com/DataKitchen/data-observability-installer) --- [Free Data Quality and Data Observability Certification] (https://info.datakitchen.io/data-observability-and-data-quality-testing-certification) --- [Databricks] (https://www.databricks.com/) --- [DORA Metrics] (https://dora.dev/) --- [DORA for data] (https://datakitchen.io/two-downs-make-two-ups-the-only-success-metrics-that-matter-for-your-data-analytics-team/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more
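To make the idea of automated data quality validation discussed above more concrete, here is a deliberately tiny check written with pandas. It is a hypothetical illustration of the general pattern, not the actual API of DataOps TestGen.

```python
import pandas as pd


def null_rate_check(df: pd.DataFrame, column: str, max_null_rate: float = 0.01) -> bool:
    """Return True if the share of nulls in `column` is within the allowed threshold."""
    null_rate = df[column].isna().mean()
    print(f"{column}: null rate {null_rate:.2%} (threshold {max_null_rate:.2%})")
    return null_rate <= max_null_rate


# Example: a toy orders table with one missing order_id (25% null rate).
orders = pd.DataFrame({"order_id": [1, 2, 3, None], "amount": [10.0, 12.5, None, 8.0]})
assert null_rate_check(orders, "order_id", max_null_rate=0.30)
```

Tools in this space generate and schedule many such checks automatically and route failures into an observability or incident workflow rather than a bare assert.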

04 Aug 2024

53 MINS

53:30



#437

Achieving Data Reliability: The Role of Data Contracts in Modern Data Management

SummaryData contracts are both an enforcement mechanism for data quality, and a promise to downstream consumers. In this episode Tom Baeyens returns to discuss the purpose and scope of data contracts, emphasizing their importance in achieving reliable analytical data and preventing issues before they arise. He explains how data contracts can be used to enforce guarantees and requirements (a simplified contract check is sketched after the episode links below), and how they fit into the broader context of data observability and quality monitoring. The discussion also covers the challenges and benefits of implementing data contracts, the organizational impact, and the potential for standardization in the field.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---At Outshift, the incubation engine from Cisco, they are driving innovation in AI, cloud, and quantum technologies with the powerful combination of enterprise strength and startup agility. Their latest innovation for the AI ecosystem is Motific, addressing a critical gap in going from prototype to production with generative AI. Motific is your vendor and model-agnostic platform for building safe, trustworthy, and cost-effective generative AI solutions in days instead of months. Motific provides easy integration with your organizational data, combined with advanced, customizable policy controls and observability to help ensure compliance throughout the entire process. Move beyond the constraints of traditional AI implementation and ensure your projects are launched quickly and with a firm foundation of trust and efficiency. Go to [motific.ai] (https://motifica.ai) today to learn more! ---Your host is Tobias Macey and today I'm interviewing Tom Baeyens about using data contracts to build a clearer API for your data Interview ---Introduction ---How did you get involved in the area of data management? ---Can you describe the scope and purpose of data contracts in the context of this conversation? ---In what way(s) do they differ from data quality/data observability? ---Data contracts are also known as the API for data, can you elaborate on this? ---What are the types of guarantees and requirements that you can enforce with these data contracts? ---What are some examples of constraints or guarantees that cannot be represented in these contracts? ---Are data contracts related to the shift-left movement? ---The obvious application of data contracts is in the context of pipeline execution flows to prevent failing checks from propagating further in the data flow. What are some of the other ways that these contracts can be integrated into an organization's data ecosystem? ---How did you approach the design of the syntax and implementation for Soda's data contracts? 
---Guarantees and constraints around data in different contexts have been implemented in numerous tools and systems. What are the areas of overlap in e.g. dbt, great expectations? ---Are there any emerging standards or design patterns around data contracts/guarantees that will help encourage portability and integration across tooling/platform contexts? ---What are the most interesting, innovative, or unexpected ways that you have seen data contracts used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on data contracts at Soda? ---When are data contracts the wrong choice? ---What do you have planned for the future of data contracts? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/tombaeyens/?originalSubdomain=be) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [Soda] (https://www.soda.io/) --- [Podcast Episode] (https://www.dataengineeringpodcast.com/soda-data-quality-management-episode-178) --- [JBoss] (https://en.wikipedia.org/wiki/JBoss_Enterprise_Application_Platform) --- [Data Contract] (https://datacreation.substack.com/p/what-is-and-what-isnt-a-data-contract) --- [Airflow] (https://airflow.apache.org/) --- [Unit Testing] (https://en.wikipedia.org/wiki/Unit_testing) --- [Integration Testing] (https://en.wikipedia.org/wiki/Integration_testing) --- [OpenAPI] (https://www.openapis.org/) --- [GraphQL] (https://graphql.org/) --- [Circuit Breaker Pattern] (https://martinfowler.com/bliki/CircuitBreaker.html) --- [SodaCL] (https://docs.soda.io/soda/quick-start-sodacl.html) --- [Soda Data Contracts] (https://docs.soda.io/soda/data-contracts.html) --- [Data Mesh] (https://www.datamesh-architecture.com/) --- [Great Expectations] (https://greatexpectations.io/) --- [dbt Unit Tests] (https://docs.getdbt.com/docs/build/unit-tests) --- [Open Data Contracts] (https://opendatacontract.com/) --- [ODCS == Open Data Contract Standard] (https://bitol-io.github.io/open-data-contract-standard/latest/) --- [ODPS == Open Data Product Specification] (https://opendataproducts.org/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more
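To make the "API for data" framing above more tangible, the following sketch validates a dataframe against a minimal, hypothetical contract. The contract structure and field names here are invented for illustration and do not follow Soda's actual data contract syntax (linked above).

```python
import pandas as pd

# Hypothetical contract: expected columns, dtypes, and not-null constraints.
CONTRACT = {
    "columns": {
        "customer_id": {"dtype": "int64", "not_null": True},
        "email": {"dtype": "object", "not_null": True},
        "signup_date": {"dtype": "datetime64[ns]", "not_null": False},
    }
}


def check_contract(df: pd.DataFrame, contract: dict) -> list[str]:
    """Return a list of human-readable violations; an empty list means the data passes."""
    violations = []
    for name, rules in contract["columns"].items():
        if name not in df.columns:
            violations.append(f"missing column: {name}")
            continue
        if str(df[name].dtype) != rules["dtype"]:
            violations.append(f"{name}: expected {rules['dtype']}, got {df[name].dtype}")
        if rules.get("not_null") and df[name].isna().any():
            violations.append(f"{name}: declared not-null but contains nulls")
    return violations
```

Run at the boundary between producer and consumer (for example as a pipeline step), a check like this is what lets a failing contract stop bad data from propagating downstream.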

28 Jul 2024

49 MINS

49:26



#436

How Generative AI Is Impacting Data Engineering Teams

SummaryGenerative AI has rapidly gained adoption for numerous use cases. To support those applications, organizational data platforms need to add new features and data teams have increased responsibility. In this episode Lior Gavish, co-founder of Monte Carlo, discusses the various ways that data teams are evolving to support AI powered features and how they are incorporating AI into their work.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Lior Gavish about the impact of AI on data engineers Interview ---Introduction ---How did you get involved in the area of data management? ---Can you start by clarifying what we are discussing when we say "AI"? ---Previous generations of machine learning (e.g. deep learning, reinforcement learning, etc.) required new features in the data platform. What new demands is the current generation of AI introducing? ---Generative AI also has the potential to be incorporated in the creation/execution of data pipelines. What are the risk/reward tradeoffs that you have seen in practice? ------What are the areas where LLMs have proven useful/effective in data engineering? ---Vector embeddings have rapidly become a ubiquitous data format as a result of the growth in retrieval augmented generation (RAG) for AI applications. What are the end-to-end operational requirements to support this use case effectively? ------As with all data, the reliability and quality of the vectors will impact the viability of the AI application. What are the different failure modes/quality metrics/error conditions that they are subject to? ---As much as vectors, vector databases, RAG, etc. seem exotic and new, it is all ultimately shades of the same work that we have been doing for years. What are the areas of overlap in the work required for running the current generation of AI, and what are the areas where it diverges? ------What new skills do data teams need to acquire to be effective in supporting AI applications? ---What are the most interesting, innovative, or unexpected ways that you have seen AI impact data engineering teams? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working with the current generation of AI? ---When is AI the wrong choice? ---What are your predictions for the future impact of AI on data engineering teams? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/lgavish/) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. 
The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [Monte Carlo] (https://www.montecarlodata.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/monte-carlo-observability-data-quality-episode-155) --- [NLP == Natural Language Processing] (https://en.wikipedia.org/wiki/Natural_language_processing) --- [Large Language Models] (https://en.wikipedia.org/wiki/Large_language_model) --- [Generative AI] (https://en.wikipedia.org/wiki/Generative_artificial_intelligence) --- [MLOps] (https://en.wikipedia.org/wiki/MLOps) --- [ML Engineer] (https://www.coursera.org/articles/what-is-machine-learning-engineer) --- [Feature Store] (https://www.featurestore.org/what-is-a-feature-store) --- [Retrieval Augmented Generation (RAG)] (https://github.blog/2024-04-04-what-is-retrieval-augmented-generation-and-what-does-it-do-for-generative-ai/) --- [Langchain] (https://www.langchain.com/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more
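As a back-of-the-envelope illustration of the retrieval step behind the RAG applications discussed above, the following numpy sketch ranks stored embeddings by cosine similarity to a query embedding. It is generic math, not tied to any particular vector database or vendor mentioned in the episode.

```python
import numpy as np


def top_k(query_embedding: np.ndarray, corpus_embeddings: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k corpus vectors most similar to the query (cosine similarity)."""
    q = query_embedding / np.linalg.norm(query_embedding)
    c = corpus_embeddings / np.linalg.norm(corpus_embeddings, axis=1, keepdims=True)
    scores = c @ q  # cosine similarity against every stored embedding
    return np.argsort(scores)[::-1][:k]  # highest-scoring documents first


corpus = np.random.rand(100, 384)  # e.g. 100 documents with 384-dimensional embeddings
query = np.random.rand(384)
print(top_k(query, corpus, k=3))
```

Much of the data engineering work discussed in the episode amounts to keeping those corpus embeddings fresh, complete, and of known quality so that the retrieval step returns trustworthy context.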

21 Jul 2024

54 MINS

54:45



#435

The Role of Product Managers in Data-Centric Organizations

SummaryIn this episode Praveen Gujar, Director of Product at LinkedIn, talks about the intricacies of product management for data and analytical platforms. Praveen shares his journey from Amazon to Twitter and now LinkedIn, highlighting his extensive experience in building data products and platforms, digital advertising, AI, and cloud services. He discusses the evolving role of product managers in data-centric environments, emphasizing the importance of clean, reliable, and compliant data. Praveen also delves into the challenges of building scalable data platforms, the need for organizational and cultural alignment, and the critical role of product managers in bridging the gap between engineering and business teams. He provides insights into the complexities of platformization, the significance of long-term planning, and the necessity of having a strong relationship with engineering teams. The episode concludes with Praveen offering advice for aspiring product managers and discussing the future of data management in the context of AI and regulatory compliance.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Praveen Gujar about product management for data and analytical platforms Interview ---Introduction ---How did you get involved in the area of data management? ---Product management is typically thought of as being oriented toward customer facing functionality and features. What is involved in being a product manager for data systems? ---Many data-oriented products that are customer facing require substantial technical capacity to serve those use cases. How does that influence the process of determining what features to provide/create? ---investment in technical capacity/platforms ---identifying groupings of features that can be served by a common platform investment ---managing organizational pressures between engineering, product, business, finance, etc. ---What are the most interesting, innovative, or unexpected ways that you have seen "Data Products & Platforms @ Big-tech" used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on "Building Data Products & Platforms for Big-tech"? ---When is "Data Products & Platforms @ Big-tech" the wrong choice? ---What do you have planned for the future of "Data Products & Platforms @ Big-tech"? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/praveengujar/) --- [Website] (https://about.me/praveen.gujar) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. 
[Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [DataHub] (https://datahubproject.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/datahub-metadata-management-episode-147) --- [RAG == Retrieval Augmented Generation] (https://github.blog/2024-04-04-what-is-retrieval-augmented-generation-and-what-does-it-do-for-generative-ai/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more

13 Jul 2024

52 MINS

52:58



#434

Neon: A Serverless And Developer Friendly Postgres

SummaryPostgres is one of the most widely respected and liked database engines ever. To make it even easier for developers to use, Nikita Shamgunov decided to make it serverless, so that it can scale from zero to infinity. In this episode he explains the engineering involved to make that possible, as well as the numerous details that he and his team are packing into the Neon service to make it even more attractive for anyone who wants to build on top of Postgres (a small pgvector query sketch follows the episode links below).Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Nikita Shamgunov about his work on making Postgres a serverless database at Neon. Interview ---Introduction ---How did you get involved in the area of data management? ---Can you describe what Neon is and the story behind it? ------The ecosystem around Postgres is large and varied. What are the pain points that you are trying to address with Neon? ---What does it mean for a database to be serverless? ------What kinds of products and services are unlocked by making Postgres a serverless database? ---How does your vision for Neon compare/contrast with what you know of PlanetScale? ---Postgres is known for having a large ecosystem of plugins that add a lot of interesting and useful features, but the storage layer has not been as easily extensible historically. How have architectural changes in recent Postgres releases enabled your work on Neon? ---What are the core pieces of engineering that you have had to complete to make Neon possible? ------How have the design and goals of the project evolved since you first started working on it? ---The separation of storage and compute is one of the most fundamental promises of the cloud. What new capabilities does that enable in Postgres? ------How does the branching functionality change the ways that development teams are able to deliver and debug features? ---Because the storage is now a networked system, what new performance/latency challenges does that introduce? How have you addressed them in Neon? ---Anyone who has ever operated a Postgres instance has had to tackle the upgrade process. How does Neon address that process for end users? ---The rampant growth of AI has touched almost every aspect of computing, and Postgres is no exception. How does the introduction of pgvector and semantic/similarity search functionality impact the adoption and usage patterns of Postgres/Neon? ------What new challenges does that introduce for you as an operator and business owner? ---What are the lessons that you learned from MemSQL/SingleStore that have been most helpful in your work at Neon? ---What are the most interesting, innovative, or unexpected ways that you have seen Neon used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on Neon? 
---When is Neon the wrong choice? Postgres? ---What do you have planned for the future of Neon? Contact Info --- [@nikitabase] (https://x.com/nikitabase) on Twitter --- [LinkedIn] (https://www.linkedin.com/in/nikitashamgunov/) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [Neon] (https://neon.tech/) --- [PostgreSQL] (https://www.postgresql.org/) --- [Neon Github] (https://github.com/neondatabase/neon) --- [PHP] (https://www.php.net/) --- [MySQL] (https://www.mysql.com/) --- [SQL Server] (https://en.wikipedia.org/wiki/Microsoft_SQL_Server) --- [SingleStore] (https://www.singlestore.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/memsql-with-nikita-shamgunov-episode-51) --- [AWS Aurora] (https://aws.amazon.com/rds/aurora/) --- [Khosla Ventures] (https://www.khoslaventures.com/) --- [YugabyteDB] (https://www.yugabyte.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/yugabytedb-planet-scale-sql-episode-115) --- [CockroachDB] (https://www.cockroachlabs.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/cockroachdb-with-peter-mattis-episode-35) --- [PlanetScale] (https://planetscale.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/planetscale-serverless-mysql-episode-349) --- [Clickhouse] (https://clickhouse.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/clickhouse-data-warehouse-episode-88) --- [DuckDB] (https://duckdb.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/duckdb-in-process-olap-database-episode-270) --- [WAL == Write-Ahead Log] (https://en.wikipedia.org/wiki/Write-ahead_logging) --- [PgBouncer] (https://www.pgbouncer.org/) --- [PureStorage] (https://www.purestorage.com/) --- [Paxos] (https://en.wikipedia.org/wiki/Paxos_(computer_science) ) --- [HNSW Index] (https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) --- [IVF Flat Index] (https://www.timescale.com/blog/nearest-neighbor-indexes-what-are-ivfflat-indexes-in-pgvector-and-how-do-they-work/) --- [RAG == Retrieval Augmented Generation] (https://github.blog/2024-04-04-what-is-retrieval-augmented-generation-and-what-does-it-do-for-generative-ai/) --- [AlloyDB] (https://cloud.google.com/alloydb) --- [Neon Serverless Driver] (https://neon.tech/docs/serverless/serverless-driver) --- [Devin] (https://preview.devin.ai/) --- [magic.dev] (https://magic.dev/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more
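To ground the pgvector discussion above, here is a small sketch using the psycopg driver against a Postgres (or Neon) database where the pgvector extension is available. The connection string and table are placeholders; `<->` is pgvector's L2 distance operator and the HNSW index is the approximate-nearest-neighbor index mentioned in the links.

```python
import psycopg  # psycopg 3

DSN = "postgresql://user:password@your-host/dbname"  # placeholder connection string

with psycopg.connect(DSN) as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS items (id bigserial PRIMARY KEY, embedding vector(3))"
    )
    # HNSW index for approximate nearest-neighbor search over the embeddings
    conn.execute(
        "CREATE INDEX IF NOT EXISTS items_hnsw ON items USING hnsw (embedding vector_l2_ops)"
    )
    conn.execute("INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]')")
    rows = conn.execute(
        "SELECT id FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5"
    ).fetchall()
    print(rows)
```

On a serverless deployment the notable point is that the same SQL works while compute scales to zero between queries; the storage-side engineering described in the episode is what keeps that transparent to the client.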

08 Jul 2024

57 MINS

57:43



#433

Improve Data Quality Through Engineering Rigor And Business Engagement With Synq

SummaryThis episode features an insightful conversation with Petr Janda, the CEO and founder of Synq. Petr shares his journey from being an engineer to founding Synq, emphasizing the importance of treating data systems with the same rigor as engineering systems. He discusses the challenges and solutions in data reliability, including the need for transparency and ownership in data systems. Synq's platform helps data teams manage incidents, understand data dependencies, and ensure data quality by providing insights and automation capabilities. Petr emphasizes the need for a holistic approach to data reliability, integrating data systems into broader business processes. He highlights the role of data teams in modern organizations and how Synq is empowering them to achieve this.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Petr Janda about Synq, a data reliability platform focused on leveling up data teams by supporting a culture of engineering rigor Interview ---Introduction ---How did you get involved in the area of data management? ---Can you describe what Synq is and the story behind it? ------Data observability/reliability is a category that grew rapidly over the past ~5 years and has several vendors focused on different elements of the problem. What are the capabilities that you saw as lacking in the ecosystem which you are looking to address? ---Operational/infrastructure engineers have spent the past decade honing their approach to incident management and uptime commitments. How do those concepts map to the responsibilities and workflows of data teams? ------Tooling only plays a small part in SLAs and incident management. How does Synq help to support the cultural transformation that is necessary? ---What does an on-call rotation for a data engineer/data platform engineer look like as compared with an application-focused team? ---How does the focus on data assets/data products shift your approach to observability as compared to a table/pipeline centric approach? ---With the focus on sharing ownership beyond the boundaries on the data team there is a strong correlation with data governance principles. How do you see organizations incorporating Synq into their approach to data governance/compliance? ---Can you describe how Synq is designed/implemented? ------How have the scope and goals of the product changed since you first started working on it? ---For a team who is onboarding onto Synq, what are the steps required to get it integrated into their technology stack and workflows? ---What are the types of incidents/errors that you are able to identify and alert on? ------What does a typical incident/error resolution process look like with Synq? ---What are the most interesting, innovative, or unexpected ways that you have seen Synq used? 
---What are the most interesting, unexpected, or challenging lessons that you have learned while working on Synq? ---When is Synq the wrong choice? ---What do you have planned for the future of Synq? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/petr-janda/?originalSubdomain=dk) --- [Substack] (https://substack.com/@petrjanda) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [Synq] (https://www.synq.io/) --- [Incident Management] (https://www.pagerduty.com/resources/learn/what-is-incident-management/) --- [SLA == Service Level Agreement] (https://en.wikipedia.org/wiki/Service-level_agreement) --- [Data Governance] (https://en.wikipedia.org/wiki/Data_governance) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/nicola-askham-practical-data-governance-episode-428) --- [PagerDuty] (https://www.pagerduty.com/) --- [OpsGenie] (https://www.atlassian.com/software/opsgenie) --- [Clickhouse] (https://clickhouse.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/clickhouse-data-warehouse-episode-88/) --- [dbt] (https://www.getdbt.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/dbt-data-analytics-episode-81/) --- [SQLMesh] (https://sqlmesh.readthedocs.io/en/stable/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/sqlmesh-open-source-dataops-episode-380) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more

30 Jun 2024

59 MINS

59:48



#432

Stitching Together Enterprise Analytics With Microsoft Fabric

Summary ------- Data lakehouse architectures have been gaining significant adoption. To accelerate adoption in the enterprise Microsoft has created the Fabric platform, based on their OneLake architecture. In this episode Dipti Borkar shares her experiences working on the product team at Fabric and explains the various use cases for the Fabric service. Announcements ------------- ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Dipti Borkar about her work on Microsoft Fabric and performing analytics on data withou Interview --------- ---Introduction ---How did you get involved in the area of data management? ---Can you describe what Microsoft Fabric is and the story behind it? ---Data lakes in various forms have been gaining significant popularity as a unified interface to an organization's analytics. What are the motivating factors that you see for that trend? ---Microsoft has been investing heavily in open source in recent years, and the Fabric platform relies on several open components. What are the benefits of layering on top of existing technologies rather than building a fully custom solution? ------What are the elements of Fabric that were engineered specifically for the service? ------What are the most interesting/complicated integration challenges? ---How has your prior experience with Ahana and Presto informed your current work at Microsoft? ---AI plays a substantial role in the product. What are the benefits of embedding Copilot into the data engine? ------What are the challenges in terms of safety and reliability? ---What are the most interesting, innovative, or unexpected ways that you have seen the Fabric platform used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on data lakes generally, and Fabric specifically? ---When is Fabric the wrong choice? ---What do you have planned for the future of data lake analytics? Contact Info ------------ --- [LinkedIn] (https://www.linkedin.com/in/diptiborkar/) Parting Question ---------------- ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements --------------------- ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! 
Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story. Links ----- --- [Microsoft Fabric] (https://www.microsoft.com/microsoft-fabric) --- [Ahana episode] (https://www.dataengineeringpodcast.com/ahana-presto-cloud-data-lake-episode-217) --- [DB2 Distributed] (https://www.ibm.com/docs/en/db2/11.5?topic=managers-designing-distributed-databases) --- [Spark] (https://spark.apache.org/) --- [Presto] (https://prestodb.io/) --- [Azure Data] (https://azure.microsoft.com/en-us/products#analytics) --- [MAD Landscape] (https://mattturck.com/mad2024/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/mad-landscape-2023-data-infrastructure-episode-369) ------ [ML Podcast Episode] (https://www.themachinelearningpodcast.com/mad-landscape-2023-ml-ai-episode-21) --- [Tableau] (https://www.tableau.com/) --- [dbt] (https://www.getdbt.com/) --- [Medallion Architecture] (https://dataengineering.wiki/Concepts/Medallion+Architecture) --- [Microsoft Onelake] (https://learn.microsoft.com/fabric/onelake/onelake-overview) --- [ORC] (https://orc.apache.org/) --- [Parquet] (https://parquet.incubator.apache.org) --- [Avro] (https://avro.apache.org/) --- [Delta Lake] (https://delta.io/) --- [Iceberg] (https://iceberg.apache.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52/) --- [Hudi] (https://hudi.apache.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/hudi-streaming-data-lake-episode-209) --- [Hadoop] (https://hadoop.apache.org/) --- [PowerBI] (https://www.microsoft.com/power-platform/products/power-bi) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/power-bi-business-intelligence-episode-154) --- [Velox] (https://velox-lib.io/) --- [Gluten] (https://gluten.apache.org/) --- [Apache XTable] (https://xtable.apache.org/) --- [GraphQL] (https://graphql.org/) --- [Formula 1] (https://www.formula1.com/) --- [McLaren] (https://www.mclaren.com/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) Sponsored By: --- [Starburst] (https://www.dataengineeringpodcast.com/starburst) : [![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - an end-to-end data lakehouse platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, the query engine Apache Iceberg was designed for, Starburst is an open platform with support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance allowing you to discover, transform, govern, and secure all in one place. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. Go to [dataengineeringpodcast.com/starburst](https://www.dataengineeringpodcast.com/starburst)] (https://www.dataengineeringpodcast.com/starburst) [Support Data Engineering Podcast] (https://dataengineering.supercast.com/) ... 
Read more

23 Jun 2024

53 MINS

53:23



#431

Being Data Driven At Stripe With Trino And Iceberg

Summary ------- Stripe is a company that relies on data to power their products and business. To support that functionality they have invested in Trino and Iceberg for their analytical workloads. In this episode Kevin Liu shares some of the interesting features that they have built by combining those technologies, as well as the challenges that they face in supporting the myriad workloads that are thrown at this layer of their data platform (a minimal metadata-table query sketch follows the episode links below). Announcements ------------- ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Kevin Liu about his use of Trino and Iceberg for Stripe's data lakehouse Interview --------- ---Introduction ---How did you get involved in the area of data management? ---Can you describe what role Trino and Iceberg play in Stripe's data architecture? ------What are the ways in which your job responsibilities intersect with Stripe's lakehouse infrastructure? ---What were the requirements and selection criteria that led to the selection of that combination of technologies? ------What are the other systems that feed into and rely on the Trino/Iceberg service? ---What kinds of questions are you answering with table metadata? ------What use case/team does that support? ---Comparative utility of the Iceberg REST catalog ---What are the shortcomings of Trino and Iceberg? ---What are the most interesting, innovative, or unexpected ways that you have seen Iceberg/Trino used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on Stripe's data infrastructure? ---When is a lakehouse on Trino/Iceberg the wrong choice? ---What do you have planned for the future of Trino and Iceberg at Stripe? Contact Info ------------ --- [Substack] (https://kevinjqliu.substack.com) --- [LinkedIn] (https://www.linkedin.com/in/kevinjqliu) Parting Question ---------------- ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements --------------------- ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story. 
Links ----- --- [Trino] (https://trino.io/) --- [Iceberg] (https://iceberg.apache.org/) --- [Stripe] (https://stripe.com/) --- [Spark] (https://spark.apache.org/) --- [Redshift] (https://aws.amazon.com/redshift/) --- [Hive Metastore] (https://cwiki.apache.org/confluence/display/hive/design#Design-Metastore) --- [Python Iceberg] (https://py.iceberg.apache.org/) --- [Python Iceberg REST Catalog] (https://github.com/kevinjqliu/iceberg-rest-catalog) --- [Trino Metadata Table] (https://trino.io/docs/current/connector/iceberg.html#metadata-tables) --- [Flink] (https://flink.apache.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/apache-flink-with-fabian-hueske-episode-57) --- [Tabular] (https://tabular.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/tabular-iceberg-lakehouse-tables-episode-363) --- [Delta Table] (https://delta.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/delta-lake-data-lake-episode-85/) --- [Databricks Unity Catalog] (https://www.databricks.com/product/unity-catalog) --- [Starburst] (https://www.starburst.io/) --- [AWS Athena] (https://aws.amazon.com/athena/) --- [Kevin Trinofest Presentation] (https://trino.io/blog/2023/07/19/trino-fest-2023-stripe.html) --- [Alluxio] (https://www.alluxio.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/alluxio-distributed-storage-episode-70) --- [Parquet] (https://parquet.incubator.apache.org/) --- [Hudi] (https://hudi.apache.org/) --- [Trino Project Tardigrade] (https://trino.io/blog/2022/05/05/tardigrade-launch.html) --- [Trino On Ice] (https://www.starburst.io/blog/iceberg-table-partitioning/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) Sponsored By: --- [Starburst] (https://www.dataengineeringpodcast.com/starburst) : [![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - an end-to-end data lakehouse platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, the query engine Apache Iceberg was designed for, Starburst is an open platform with support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance allowing you to discover, transform, govern, and secure all in one place. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. Go to [dataengineeringpodcast.com/starburst](https://www.dataengineeringpodcast.com/starburst)] (https://www.dataengineeringpodcast.com/starburst) [Support Data Engineering Podcast] (https://dataengineering.supercast.com/) ... Read more
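For readers who want to poke at the table-metadata use case mentioned in the interview, Trino's Iceberg connector exposes per-table metadata through hidden "$"-suffixed tables (see the Trino Metadata Table link above). A minimal query with the trino Python client might look like the sketch below; the host, catalog, schema, and table names are placeholders for your own deployment.

```python
from trino.dbapi import connect

# Placeholder connection details; adjust for your own Trino cluster.
conn = connect(
    host="trino.example.internal",
    port=443,
    user="analyst",
    http_scheme="https",
    catalog="iceberg",
    schema="analytics",
)
cur = conn.cursor()

# The Iceberg connector exposes metadata tables such as "<table>$snapshots",
# "<table>$files", and "<table>$history" alongside the data itself.
cur.execute(
    'SELECT snapshot_id, committed_at, operation '
    'FROM "orders$snapshots" ORDER BY committed_at DESC'
)
for snapshot_id, committed_at, operation in cur.fetchall():
    print(snapshot_id, committed_at, operation)
```

Queries like this are what make it possible to answer operational questions (who wrote what, when, and how much) without standing up a separate metadata system.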

16 Jun 2024

53 MINS

53:20



#430

X-Ray Vision For Your Flink Stream Processing With Datorios

Summary ------- Streaming data processing enables new categories of data products and analytics. Unfortunately, reasoning about stream processing engines is complex and lacks sufficient tooling. To address this shortcoming Datorios created an observability platform for Flink that brings visibility to the internals of this popular stream processing system. In this episode Ronen Korman and Stav Elkayam discuss how the increased understanding provided by purpose built observability improves the usefulness of Flink (a toy PyFlink pipeline sketch follows the episode links below). Announcements ------------- ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to [dataengineeringpodcast.com/codecomments] (https://www.dataengineeringpodcast.com/codecomments) today to subscribe. My thanks to the team at Code Comments for their support. ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Ronen Korman and Stav Elkayam about pulling back the curtain on your real-time data streams by bringing intuitive observability to Flink streams Interview --------- ---Introduction ---How did you get involved in the area of data management? ---Can you describe what Datorios is and the story behind it? ---Data observability has been gaining adoption for a number of years now, with a large focus on data warehouses. What are some of the unique challenges posed by Flink? ------How much of the complexity is due to the nature of streaming data vs. the architectural realities of Flink? ---How has the lack of visibility into the flow of data in Flink impacted the ways that teams think about where/when/how to apply it? ---How have the requirements of generative AI shifted the demand for streaming data systems? ------What role does Flink play in the architecture of generative AI systems? ---Can you describe how Datorios is implemented? ------How have the design and goals of Datorios changed since you first started working on it? ---How much of the Datorios architecture and functionality is specific to Flink and how are you thinking about its potential application to other streaming platforms? 
---Can you describe how Datorios is used in a day-to-day workflow for someone building streaming applications on Flink? ---What are the most interesting, innovative, or unexpected ways that you have seen Datorios used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on Datorios? ---When is Datorios the wrong choice? ---What do you have planned for the future of Datorios? Contact Info ------------ ---Ronen ------ [LinkedIn] (https://www.linkedin.com/in/ronen-korman/) ---Stav ------ [LinkedIn] (https://www.linkedin.com/in/stav-elkayam-118a2795/?originalSubdomain=il) Parting Question ---------------- ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements --------------------- ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story. Links ----- --- [Datorios] (https://datorios.com/) --- [Apache Flink] (https://flink.apache.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/apache-flink-with-fabian-hueske-episode-57) --- [ChatGPT-4o] (https://openai.com/index/hello-gpt-4o/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) Sponsored By: --- [Starburst] (https://www.dataengineeringpodcast.com/starburst) : [![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - an end-to-end data lakehouse platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, the query engine Apache Iceberg was designed for, Starburst is an open platform with support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance allowing you to discover, transform, govern, and secure all in one place. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. Go to [dataengineeringpodcast.com/starburst](https://www.dataengineeringpodcast.com/starburst)] (https://www.dataengineeringpodcast.com/starburst) --- [Red Hat Code Comments Podcast] (https://link.chtbl.com/codecomments?sid=podcast.dataengineering) : [![Code Comments Podcast Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/A-ygm_NM.jpg) Putting new technology to use is an exciting prospect. 
But going from purchase to production isn’t always smooth—even when it’s something everyone is looking forward to. Code Comments covers the bumps, the hiccups, and the setbacks teams face when adjusting to new technology—and the triumphs they pull off once they really get going. Follow Code Comments [anywhere you listen to podcasts](https://link.chtbl.com/codecomments?sid=podcast.dataengineering).] (https://link.chtbl.com/codecomments?sid=podcast.dataengineering) [Support Data Engineering Podcast] (https://dataengineering.supercast.com/) ... Read more
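
Not covered in the episode itself, and not a reflection of how Datorios or Flink are implemented: the following is a minimal, standard-library-only Python sketch of the kind of per-operator signal (record counts, cumulative processing time) that stream-observability tooling surfaces. The operator names and the click-stream shape are invented for illustration.

```python
import random
import time
from collections import defaultdict

# Per-operator counters and cumulative processing time. Real engines such as
# Flink expose far richer metrics (watermarks, backpressure, state size).
metrics = defaultdict(lambda: {"records": 0, "seconds": 0.0})

def instrumented(name, fn):
    """Wrap an operator so every record it processes is counted and timed."""
    def wrapper(record):
        start = time.perf_counter()
        result = fn(record)
        metrics[name]["records"] += 1
        metrics[name]["seconds"] += time.perf_counter() - start
        return result
    return wrapper

# Toy "operators" for a hypothetical click-stream pipeline.
parse = instrumented("parse", lambda raw: {"user": raw[0], "ms": raw[1]})
enrich = instrumented("enrich", lambda ev: {**ev, "slow": ev["ms"] > 250})
filter_slow = instrumented("filter_slow", lambda ev: ev if ev["slow"] else None)

if __name__ == "__main__":
    source = [(f"user{i % 5}", random.randint(50, 500)) for i in range(1_000)]
    sink = []
    for raw in source:
        ev = filter_slow(enrich(parse(raw)))
        if ev is not None:
            sink.append(ev)

    for op, m in metrics.items():
        print(f"{op:12s} records={m['records']:5d} total_ms={m['seconds'] * 1000:.2f}")
    print(f"records emitted to sink: {len(sink)}")
```

The point of the sketch is only that observability for streaming means attributing counts and latency to individual operators, which is what makes a misbehaving stage visible.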

09 Jun 2024

42 MINS

42:22


#429

Practical First Steps In Data Governance For Long Term Success

Summary ------- Modern businesses aspire to be data-driven, and technologists enjoy working through the challenge of building data systems to support that goal. Data governance is the binding force between these two parts of the organization. Nicola Askham found her way into data governance by accident, and stayed because of the benefit that she was able to provide by serving as a bridge between the technology and business. In this episode she shares the practical steps to implementing a data governance practice in your organization, and the pitfalls to avoid. Announcements ------------- ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to [dataengineeringpodcast.com/codecomments] (https://www.dataengineeringpodcast.com/codecomments) today to subscribe. My thanks to the team at Code Comments for their support. ---Your host is Tobias Macey and today I'm interviewing Nicola Askham about the practical steps of building out a data governance practice in your organization Interview --------- ---Introduction ---How did you get involved in the area of data management? ---Can you start by giving an overview of the scope and boundaries of data governance in an organization? ------At what point does a lack of an explicit governance policy become a liability? ---What are some of the misconceptions that you encounter about data governance? ---What impact has the evolution of data technologies had on the implementation of governance practices? (e.g. number/scale of systems, types of data, AI) ---Data governance can often become an exercise in boiling the ocean. What are the concrete first steps that will increase the success rate of a governance practice? ------Once a data governance project is underway, what are some of the common roadblocks that might derail progress? ---What are the net benefits to the data team and the organization when a data governance practice is established, active, and healthy? ---What are the most interesting, innovative, or unexpected ways that you have seen data governance applied? 
---What are the most interesting, unexpected, or challenging lessons that you have learned while working on data governance/training/coaching? ---What are some of the pitfalls in data governance? ---What are some of the future trends in data governance that you are excited by? ------Are there any trends that concern you? Contact Info ------------ --- [Website] (https://www.nicolaaskham.com/) --- [LinkedIn] (https://www.linkedin.com/in/nicolaaskham/) Parting Question ---------------- ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements --------------------- ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story. Links ----- --- [Website] (https://www.nicolaaskham.com/) --- [Master Data Management] (https://en.wikipedia.org/wiki/Master_data_management) --- [Cartesian Join] (https://www.geeksforgeeks.org/cartesian-join/) --- [DAMA == Data Management Community] (https://www.dama.org/) --- [DMBOK == Data Management Body of Knowledge] (https://www.dama.org/cpages/body-of-knowledge) --- [DAMA DMBOK Wheel] (https://www.dama.org/cpages/dmbok-2-wheel-images) --- [CDMP (Certified Data Management Professional) Exam] (https://www.dama.org/cpages/cdmp-information) --- [Data Mesh] (https://www.datamesh-architecture.com/) --- [Data Governance First Steps Checklist] (https://www.nicolaaskham.com/free-data-governance-checklist) --- [The Never Normal] (https://www.linkedin.com/newsletters/the-never-normal-6862024032934477824/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) Sponsored By: --- [Red Hat Code Comments Podcast] (https://link.chtbl.com/codecomments?sid=podcast.dataengineering) : [![Code Comments Podcast Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/A-ygm_NM.jpg) Putting new technology to use is an exciting prospect. But going from purchase to production isn’t always smooth—even when it’s something everyone is looking forward to. Code Comments covers the bumps, the hiccups, and the setbacks teams face when adjusting to new technology—and the triumphs they pull off once they really get going. Follow Code Comments [anywhere you listen to podcasts](https://link.chtbl.com/codecomments?sid=podcast.dataengineering).] 
(https://link.chtbl.com/codecomments?sid=podcast.dataengineering) --- [Starburst] (https://www.dataengineeringpodcast.com/starburst) : [![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - an end-to-end data lakehouse platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, the query engine Apache Iceberg was designed for, Starburst is an open platform with support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance allowing you to discover, transform, govern, and secure all in one place. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. Go to [dataengineeringpodcast.com/starburst](https://www.dataengineeringpodcast.com/starburst)] (https://www.dataengineeringpodcast.com/starburst) [Support Data Engineering Podcast] (https://dataengineering.supercast.com/) ... Read more

02 Jun 2024

1 HR 00 MINS

1:00:41


#428

Data Migration Strategies For Large Scale Systems

Summary ------- Any software system that survives long enough will require some form of migration or evolution. When that system is responsible for the data layer the process becomes more challenging. Sriram Panyam has been involved in several projects that required migration of large volumes of data in high-traffic environments. In this episode he shares some of the valuable lessons that he learned about how to make those projects successful. Announcements ------------- ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to [dataengineeringpodcast.com/codecomments] (https://www.dataengineeringpodcast.com/codecomments) today to subscribe. My thanks to the team at Code Comments for their support. ---Your host is Tobias Macey and today I'm interviewing Sriram Panyam about his experiences conducting large scale data migrations and the useful strategies that he learned in the process Interview --------- ---Introduction ---How did you get involved in the area of data management? ---Can you start by sharing some of your experiences with data migration projects? ------As you have gone through successive migration projects, how has that influenced the ways that you think about architecting data systems? ---How would you categorize the different types and motivations of migrations? ------How does the motivation for a migration influence the ways that you plan for and execute that work? ---Can you talk us through one or two specific projects that you have taken part in? ---Part 1: The Triggers ------Section 1: Technical Limitations triggering Data Migration ---------Scaling bottlenecks: Performance issues with databases, storage, or network infrastructure ---------Legacy compatibility: Difficulties integrating with modern tools and cloud platforms ---------System upgrades: The need to migrate data during major software changes (e.g., SQL Server version upgrade) ------Section 2: Types of Migrations for Infrastructure Focus ---------Storage migration: Moving data between systems (HDD to SSD, SAN to NAS, etc.) 
---------Data center migration: Physical relocation or consolidation of data centers ---------Virtualization migration: Moving from physical servers to virtual machines (or vice versa) ------Section 3: Technical Decisions Driving Data Migrations ---------End-of-life support: Forced migration when older software or hardware is sunsetted ---------Security and compliance: Adopting new platforms with better security postures ---------Cost Optimization: Potential savings of cloud vs. on-premise data centers ---Part 2: Challenges (and Anxieties) ------Section 1: Technical Challenges ---------Data transformation challenges: Schema changes, complex data mappings ---------Network bandwidth and latency: Transferring large datasets efficiently ---------Performance testing and load balancing: Ensuring new systems can handle the workload ---------Live data consistency: Maintaining data integrity while updates occur in the source system ---------Minimizing Lag: Techniques to reduce delays in replicating changes to the new system ---------Change data capture: Identifying and tracking changes to the source system during migration ------Section 2: Operational Challenges ---------Minimizing downtime: Strategies for service continuity during migration ---------Change management and rollback plans: Dealing with unexpected issues ---------Technical skills and resources: In-house expertise/data teams/external help ------Section 3: Security & Compliance Challenges ---------Data encryption and protection: Methods for both in-transit and at-rest data ---------Meeting audit requirements: Documenting data lineage & the chain of custody ---------Managing access controls: Adjusting identity and role-based access to the new systems ---Part 3: Patterns ------Section 1: Infrastructure Migration Strategies ---------Lift and shift: Migrating as-is vs. modernization and re-architecting during the move ---------Phased vs. big bang approaches: Tradeoffs in risk vs. disruption ---------Tools and automation: Using specialized software to streamline the process ---------Dual writes: Managing updates to both old and new systems for a time ---------Change data capture (CDC) methods: Log-based vs. trigger-based approaches for tracking changes ---------Data validation & reconciliation: Ensuring consistency between source and target (see the reconciliation sketch after this outline) ------Section 2: Maintaining Performance and Reliability ---------Disaster recovery planning: Failover mechanisms for the new environment ---------Monitoring and alerting: Proactively identifying and addressing issues ---------Capacity planning and forecasting growth to scale the new infrastructure ------Section 3: Data Consistency and Replication ---------Replication tools - strategies and specialized tooling ---------Data synchronization techniques, e.g. pros and cons of different methods (incremental vs. full) ---------Testing/Verification Strategies for validating data correctness in a live environment ---------Implications of large-scale systems/environments ---------Comparison of interesting strategies: ------------DBLog, Debezium, Databus, GoldenGate, etc. ---What are the most interesting, innovative, or unexpected approaches to data migrations that you have seen or participated in? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on data migrations? ---When is a migration the wrong choice? ---What are the characteristics or features of data technologies and the overall ecosystem that can reduce the burden of data migration in the future? 
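
As a concrete, hedged illustration of the "data validation & reconciliation" item above (not drawn from the episode): the sketch below compares row fingerprints between a source and a target database to surface missing, unexpected, and mismatched rows. Two in-memory SQLite databases stand in for the real systems, and the `orders` table and its columns are made up.

```python
import hashlib
import sqlite3

def row_fingerprints(conn, table, key, cols):
    """Return {primary_key: md5-of-row-values} for every row in `table`."""
    cur = conn.execute(f"SELECT {key}, {', '.join(cols)} FROM {table}")
    return {
        row[0]: hashlib.md5("|".join(str(v) for v in row[1:]).encode()).hexdigest()
        for row in cur
    }

def reconcile(source, target, table, key, cols):
    src = row_fingerprints(source, table, key, cols)
    tgt = row_fingerprints(target, table, key, cols)
    return {
        "missing_in_target": sorted(src.keys() - tgt.keys()),
        "unexpected_in_target": sorted(tgt.keys() - src.keys()),
        "mismatched": sorted(k for k in src.keys() & tgt.keys() if src[k] != tgt[k]),
    }

if __name__ == "__main__":
    source, target = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for conn in (source, target):
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, status TEXT)")
    source.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                       [(1, 10.0, "paid"), (2, 25.5, "paid"), (3, 7.0, "refunded")])
    # Simulate a migration that dropped row 3 and corrupted row 2.
    target.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                       [(1, 10.0, "paid"), (2, 25.5, "pending")])
    print(reconcile(source, target, "orders", "id", ["amount", "status"]))
```

In practice the same idea is usually applied to sampled key ranges or aggregate checksums so it scales to large tables.
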
Contact Info ------------ --- [LinkedIn] (https://www.linkedin.com/in/srirampanyam/) Parting Question ---------------- ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements --------------------- ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story. Links ----- --- [DagKnows] (https://dagknows.com) --- [Google Cloud Dataflow] (https://cloud.google.com/dataflow) --- [Seinfeld Risk Management] (https://www.youtube.com/watch) --- [ACL == Access Control List] (https://en.wikipedia.org/wiki/Access-control_list) --- [LinkedIn Databus - Change Data Capture] (https://github.com/linkedin/databus) --- [Espresso Storage] (https://engineering.linkedin.com/data-replication/open-sourcing-databus-linkedins-low-latency-change-data-capture-system) --- [HDFS] (https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html) --- [Kafka] (https://kafka.apache.org/) --- [Postgres Replication Slots] (https://www.postgresql.org/docs/current/logical-replication.html) --- [Queueing Theory] (https://en.wikipedia.org/wiki/Queueing_theory) --- [Apache Beam] (https://beam.apache.org/) --- [Debezium] (https://debezium.io/) --- [Airbyte] (https://airbyte.com/) --- [Fivetran] (fivetran.com) --- [Designing Data-Intensive Applications] (https://amzn.to/4aAztR1) by [Martin Kleppmann] (https://martin.kleppmann.com/) (affiliate link) --- [Vector Databases] (https://en.wikipedia.org/wiki/Vector_database) --- [Pinecone] (https://www.pinecone.io/) --- [Weaviate] (https://www.weveate.io/) --- [LAMP Stack] (https://en.wikipedia.org/wiki/LAMP_(software_bundle)) --- [Netflix DBLog] (https://arxiv.org/abs/2010.12597) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) Sponsored By: --- [Red Hat Code Comments Podcast] (https://link.chtbl.com/codecomments?sid=podcast.dataengineering) : [![Code Comments Podcast Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/A-ygm_NM.jpg) Putting new technology to use is an exciting prospect. But going from purchase to production isn’t always smooth—even when it’s something everyone is looking forward to. Code Comments covers the bumps, the hiccups, and the setbacks teams face when adjusting to new technology—and the triumphs they pull off once they really get going. Follow Code Comments [anywhere you listen to podcasts](https://link.chtbl.com/codecomments?sid=podcast.dataengineering).] 
(https://link.chtbl.com/codecomments?sid=podcast.dataengineering) --- [Starburst] (https://www.dataengineeringpodcast.com/starburst) : [![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - an end-to-end data lakehouse platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, the query engine Apache Iceberg was designed for, Starburst is an open platform with support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance allowing you to discover, transform, govern, and secure all in one place. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. Go to [dataengineeringpodcast.com/starburst](https://www.dataengineeringpodcast.com/starburst)] (https://www.dataengineeringpodcast.com/starburst) [Support Data Engineering Podcast] (https://dataengineering.supercast.com/) ... Read more

27 May 2024

1 HR 00 MINS

1:00:00


#427

Zenlytic Is Building You A Better Coworker With AI Agents

Summary ------- The purpose of business intelligence systems is to allow anyone in the business to access and decode data to help them make informed decisions. Unfortunately this often turns into an exercise in frustration for everyone involved due to complex workflows and hard-to-understand dashboards. The team at Zenlytic have leaned on the promise of large language models to build an AI agent that lets you converse with your data. In this episode they share their journey through the fast-moving landscape of generative AI and unpack the difference between an AI chatbot and an AI agent (a toy agent loop is sketched after these notes). Announcements ------------- ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to [dataengineeringpodcast.com/codecomments] (https://www.dataengineeringpodcast.com/codecomments) today to subscribe. My thanks to the team at Code Comments for their support. ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Ryan Janssen and Paul Blankley about their experiences building AI-powered agents for interacting with your data Interview --------- ---Introduction ---How did you get involved in data? In AI? ---Can you describe what Zenlytic is and the role that AI is playing in your platform? ---What have been the key stages in your AI journey? ------What are some of the dead ends that you ran into along the path to where you are today? ------What are some of the persistent challenges that you are facing? ---So tell us more about data agents. Firstly, what are data agents and why do you think they're important? ---How are data agents different from chatbots? ---Are data agents harder to build? How do you make them work in production? ---What other technical architectures have you had to develop to support the use of AI in Zenlytic? ---How have you approached the work of customer education as you introduce this functionality? ---What are some of the most interesting or erroneous misconceptions that you have heard about what the AI can and can't do? 
---How have you balanced accuracy/trustworthiness with user experience and flexibility in the conversational AI, given the potential for these models to create erroneous responses? ---What are the most interesting, innovative, or unexpected ways that you have seen your AI agent used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on building an AI agent for business intelligence? ---When is an AI agent the wrong choice? ---What do you have planned for the future of AI in the Zenlytic product? Contact Info ------------ ---Ryan ------ [LinkedIn] (https://www.linkedin.com/in/janssenryan) ---Paul ------ [LinkedIn] (https://www.linkedin.com/in/paulblankley/) Parting Question ---------------- ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements --------------------- ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story. Links ----- --- [Zenlytic] (https://www.zenlytic.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/zenlytic-self-serve-business-intelligence-episode-371) --- [Attention is all you need] (https://arxiv.org/abs/1706.03762) --- [Transformers] (https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)) --- [BERT] (https://en.wikipedia.org/wiki/BERT_(language_model)) --- [The Bitter Lesson] (http://www.incompleteideas.net/IncIdeas/BitterLesson.html) Richard Sutton --- [PID Loops] (https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller) --- [AutoGPT] (https://github.com/Significant-Gravitas/AutoGPT) --- [Devin.ai] (https://www.cognition.ai/introducing-devin) --- [Google Gemini] (https://gemini.google.com/) --- [Anthropic Claude] (https://www.anthropic.com/claude) --- [OpenAI Code Interpreter] (https://platform.openai.com/docs/assistants/tools/code-interpreter) --- [Edward Tufte] (https://www.edwardtufte.com/tufte/books_vdqi) --- [Looker ActionHub] (https://developers.looker.com/actions/overview/) --- [OAuth] (https://oauth.net/2/) --- [GitHub Copilot] (https://github.com/features/copilot) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) Sponsored By: --- [Starburst] (https://www.dataengineeringpodcast.com/starburst) : [![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - an end-to-end data lakehouse platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. 
Powered by Trino, the query engine Apache Iceberg was designed for, Starburst is an open platform with support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance allowing you to discover, transform, govern, and secure all in one place. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. Go to [dataengineeringpodcast.com/starburst](https://www.dataengineeringpodcast.com/starburst)] (https://www.dataengineeringpodcast.com/starburst) --- [Red Hat Code Comments Podcast] (https://link.chtbl.com/codecomments?sid=podcast.dataengineering) : [![Code Comments Podcast Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/A-ygm_NM.jpg) Putting new technology to use is an exciting prospect. But going from purchase to production isn’t always smooth—even when it’s something everyone is looking forward to. Code Comments covers the bumps, the hiccups, and the setbacks teams face when adjusting to new technology—and the triumphs they pull off once they really get going. Follow Code Comments [anywhere you listen to podcasts](https://link.chtbl.com/codecomments?sid=podcast.dataengineering).] (https://link.chtbl.com/codecomments?sid=podcast.dataengineering) [Support Data Engineering Podcast] (https://dataengineering.supercast.com/) ... Read more
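
To make the chatbot-versus-agent distinction in the notes above concrete, here is a deliberately tiny sketch: the "model" is a hard-coded stub (no real LLM is called), and the loop treats its output as an action to execute against a SQL tool before deciding again. None of this reflects Zenlytic's implementation; every table, function, and action name is hypothetical.

```python
import sqlite3

def fake_model(question, observations):
    """Stand-in for an LLM: first asks to run a query, then answers from the result."""
    if not observations:
        return {"action": "run_sql",
                "input": "SELECT region, SUM(revenue) FROM sales GROUP BY region"}
    return {"action": "answer", "input": f"Revenue by region: {observations[-1]}"}

def run_agent(question, conn, max_steps=5):
    observations = []
    for _ in range(max_steps):
        step = fake_model(question, observations)
        if step["action"] == "answer":      # a chatbot would only ever reach this branch
            return step["input"]
        if step["action"] == "run_sql":     # an agent acts, observes, and decides again
            observations.append(conn.execute(step["input"]).fetchall())
    return "gave up"

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 60.0)])
    print(run_agent("How is revenue split by region?", conn))
```

The essential difference the sketch tries to show is the act-observe-decide loop around a tool, rather than a single text completion.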

19 May 2024

54 MINS

54:19


#426

Release Management For Data Platform Services And Logic

Summary ------- Building a data platform is a substantial engineering endeavor. Once it is running, the next challenge is figuring out how to address release management for all of the different component parts. The services and systems need to be kept up to date, but so does the code that controls their behavior. In this episode your host Tobias Macey reflects on his current challenges in this area and some of the factors that contribute to the complexity of the problem. Announcements ------------- ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to [dataengineeringpodcast.com/codecomments] (https://www.dataengineeringpodcast.com/codecomments) today to subscribe. My thanks to the team at Code Comments for their support. ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I want to talk about my experiences managing the QA and release management process of my data platform Interview --------- ---Introduction ---As a team, our overall goal is to ensure that the production environment for our data platform is highly stable and reliable. This is the foundational element of establishing and maintaining trust with the consumers of our data. In order to support this effort, we need to ensure that only changes that have been tested and verified are promoted to production (a toy promotion-gate check is sketched after these notes). ---Our current challenge is one that plagues all data teams. We want to have an environment that mirrors our production environment that is available for testing, but it’s not feasible to maintain a complete duplicate of all of the production data. Compounding that challenge is the fact that each of the components of our data platform interact with data in slightly different ways and need different processes for ensuring that changes are being promoted safely. Contact Info ------------ --- [LinkedIn] () --- [Website] (https://www.dataengineeringpodcast.com) Closing Announcements --------------------- ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. 
The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story. Links ----- --- [Data Platforms and Leaky Abstractions Episode] (https://www.dataengineeringpodcast.com/abstractions-and-technical-debt-episode-374) --- [Building A Data Platform From Scratch] (https://www.dataengineeringpodcast.com/designing-a-lakehouse-from-scratch-episode-354) --- [Airbyte] (https://airbyte.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/airbyte-open-source-data-integration-episode-173/) --- [Trino] (https://trino.io/) --- [dbt] (https://www.getdbt.com/) --- [Starburst Galaxy] (https://www.starburst.io/platform/starburst-galaxy/) --- [Superset] (https://superset.apache.org/) --- [Dagster] (https://dagster.io/) --- [LakeFS] (https://lakefs.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/lakefs-data-lake-versioning-episode-157) --- [Nessie] (https://projectnessie.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/nessie-data-lakehouse-data-versioning-episode-416) --- [Iceberg] (https://iceberg.apache.org/) --- [Snowflake] (https://www.snowflake.com/en/) --- [LocalStack] (https://www.localstack.cloud/) --- [DSL == Domain Specific Language] (https://en.wikipedia.org/wiki/Domain-specific_language) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) Sponsored By: --- [Starburst] (https://www.dataengineeringpodcast.com/starburst) : [![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - an end-to-end data lakehouse platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, the query engine Apache Iceberg was designed for, Starburst is an open platform with support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance allowing you to discover, transform, govern, and secure all in one place. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. Go to [dataengineeringpodcast.com/starburst](https://www.dataengineeringpodcast.com/starburst)] (https://www.dataengineeringpodcast.com/starburst) --- [Red Hat Code Comments Podcast] (https://link.chtbl.com/codecomments?sid=podcast.dataengineering) : [![Code Comments Podcast Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/A-ygm_NM.jpg) Putting new technology to use is an exciting prospect. But going from purchase to production isn’t always smooth—even when it’s something everyone is looking forward to. 
Code Comments covers the bumps, the hiccups, and the setbacks teams face when adjusting to new technology—and the triumphs they pull off once they really get going. Follow Code Comments [anywhere you listen to podcasts](https://link.chtbl.com/codecomments?sid=podcast.dataengineering).] (https://link.chtbl.com/codecomments?sid=podcast.dataengineering) [Support Data Engineering Podcast] (https://dataengineering.supercast.com/) ... Read more
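
One hedged way to make "only tested and verified changes are promoted" operational is a gate script that runs data assertions against the candidate environment and fails the promotion if any check fails. The sketch below is not the workflow described in this episode; SQLite stands in for the warehouse, and the checks and table are invented for illustration.

```python
import sqlite3
import sys

# Each check: a description, a SQL probe returning one number, and a pass predicate.
CHECKS = [
    ("orders table is not empty",
     "SELECT COUNT(*) FROM orders", lambda n: n > 0),
    ("no null order ids",
     "SELECT COUNT(*) FROM orders WHERE id IS NULL", lambda n: n == 0),
    ("no negative amounts",
     "SELECT COUNT(*) FROM orders WHERE amount < 0", lambda n: n == 0),
]

def gate(conn):
    """Run every check; return True only if all pass (i.e. safe to promote)."""
    ok = True
    for name, sql, predicate in CHECKS:
        value = conn.execute(sql).fetchone()[0]
        passed = predicate(value)
        ok = ok and passed
        print(f"{'PASS' if passed else 'FAIL'}  {name} (got {value})")
    return ok

if __name__ == "__main__":
    staging = sqlite3.connect(":memory:")  # stand-in for the staging warehouse
    staging.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    staging.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, -3.0)])
    sys.exit(0 if gate(staging) else 1)  # non-zero exit blocks the promotion step in CI
```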

12 May 2024

20 MINS

20:09


#425

Barking Up The Wrong GPTree: Building Better AI With A Cognitive Approach

Summary Artificial intelligence has dominated the headlines for several months due to the successes of large language models. This has prompted numerous debates about the possibility of, and timeline for, artificial general intelligence (AGI). Peter Voss has dedicated decades of his life to the pursuit of truly intelligent software through the approach of cognitive AI. In this episode he explains his approach to building AI in a more human-like fashion and the emphasis on learning rather than statistical prediction. Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to [dataengineeringpodcast.com/dagster] (https://www.dataengineeringpodcast.com/dagster) today to get started. Your first 30 days are free! ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Peter Voss about what is involved in making your AI applications more "human" Interview ---Introduction ---How did you get involved in machine learning? ---Can you start by unpacking the idea of "human-like" AI? ------How does that contrast with the conception of "AGI"? ---The applications and limitations of GPT/LLM models have been dominating the popular conversation around AI. How do you see that impacting the overall ecosystem of ML/AI applications and investment? ---The fundamental/foundational challenge of every AI use case is sourcing appropriate data. What are the strategies that you have found useful to acquire, evaluate, and prepare data at an appropriate scale to build high quality models? ---What are the opportunities and limitations of causal modeling techniques for generalized AI models? ---As AI systems gain more sophistication there is a challenge with establishing and maintaining trust. What are the risks involved in deploying more human-level AI systems and monitoring their reliability? ---What are the practical/architectural methods necessary to build more cognitive AI systems? ------How would you characterize the ecosystem of tools/frameworks available for creating, evolving, and maintaining these applications? 
---What are the most interesting, innovative, or unexpected ways that you have seen cognitive AI applied? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on designing/developing cognitive AI systems? ---When is cognitive AI the wrong choice? ---What do you have planned for the future of cognitive AI applications at Aigo? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/vosspeter/) --- [Website] (http://optimal.org/voss.html) Parting Question ---From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story. Links --- [Aigo.ai] (https://aigo.ai/) --- [Artificial General Intelligence] (https://aigo.ai/what-is-real-agi/) --- [Cognitive AI] (https://aigo.ai/cognitive-ai/) --- [Knowledge Graph] (https://en.wikipedia.org/wiki/Knowledge_graph) --- [Causal Modeling] (https://en.wikipedia.org/wiki/Causal_model) --- [Bayesian Statistics] (https://en.wikipedia.org/wiki/Bayesian_statistics) --- [Thinking Fast & Slow] (https://amzn.to/3UJKsmK) by Daniel Kahneman (affiliate link) --- [Agent-Based Modeling] (https://en.wikipedia.org/wiki/Agent-based_model) --- [Reinforcement Learning] (https://en.wikipedia.org/wiki/Reinforcement_learning) --- [DARPA 3 Waves of AI] (https://www.darpa.mil/about-us/darpa-perspective-on-ai) presentation --- [Why Don't We Have AGI Yet?] (https://arxiv.org/abs/2308.03598) whitepaper --- [Concepts Is All You Need] (https://arxiv.org/abs/2309.01622) Whitepaper --- [Helen Keller] (https://en.wikipedia.org/wiki/Helen_Keller) --- [Stephen Hawking] (https://en.wikipedia.org/wiki/Stephen_Hawking) The intro and outro music is from [Hitman's Lovesong feat. Paola Graziano] (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA 3.0] (https://creativecommons.org/licenses/by-sa/3.0/) ... Read more

05 May 2024

54 MINS

54:17


#424

Build Your Second Brain One Piece At A Time

Summary Generative AI promises to accelerate the productivity of human collaborators. Currently the primary way of working with these tools is through a conversational prompt, which is often cumbersome and unwieldy. In order to simplify the integration of AI capabilities into developer workflows Tsavo Knott helped create Pieces, a powerful collection of tools that complements the tools that developers already use. In this episode he explains the data collection and preparation process, the collection of model types and sizes that work together to power the experience, and how to incorporate it into your workflow to act as a second brain. Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to [dataengineeringpodcast.com/dagster] (https://www.dataengineeringpodcast.com/dagster) today to get started. Your first 30 days are free! ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Tsavo Knott about Pieces, a personal AI toolkit to improve the efficiency of developers Interview ---Introduction ---How did you get involved in machine learning? ---Can you describe what Pieces is and the story behind it? ---The past few months have seen an endless series of personalized AI tools launched. What are the features and focus of Pieces that might encourage someone to use it over the alternatives? ---model selections ---architecture of Pieces application ---local vs. hybrid vs. online models ---model update/delivery process ---data preparation/serving for models in context of Pieces app ---application of AI to developer workflows ---types of workflows that people are building with pieces ---What are the most interesting, innovative, or unexpected ways that you have seen Pieces used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on Pieces? ---When is Pieces the wrong choice? ---What do you have planned for the future of Pieces? 
Contact Info --- [LinkedIn] (https://www.linkedin.com/in/tsavoknott/) Parting Question ---From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story. Links --- [Pieces] (https://pieces.app/) --- [NPU == Neural Processing Unit] (https://en.wikipedia.org/wiki/AI_accelerator) --- [Tensor Chip] (https://en.wikipedia.org/wiki/Google_Tensor) --- [LoRA == Low Rank Adaptation] (https://github.com/microsoft/LoRA) --- [Generative Adversarial Networks] (https://en.wikipedia.org/wiki/Generative_adversarial_network) --- [Mistral] (https://mistral.ai/) --- [Emacs] (https://www.gnu.org/software/emacs/) --- [Vim] (https://www.vim.org/) --- [NeoVim] (https://neovim.io/) --- [Dart] (https://dart.dev/) --- [Flutter] (https://flutter.dev/) --- [Typescript] (https://www.typescriptlang.org/) --- [Lua] (https://www.lua.org/) --- [Retrieval Augmented Generation] (https://github.blog/2024-04-04-what-is-retrieval-augmented-generation-and-what-does-it-do-for-generative-ai/) (a toy retrieval sketch follows these notes) --- [ONNX] (https://onnx.ai/) --- [LSTM == Long Short-Term Memory] (https://en.wikipedia.org/wiki/Long_short-term_memory) --- [Llama 2] (https://llama.meta.com/llama2/) --- [GitHub Copilot] (https://github.com/features/copilot) --- [Tabnine] (https://www.tabnine.com/) ------ [Podcast Episode] (https://www.themachinelearningpodcast.com/tabnine-generative-ai-developer-assistant-episode-24) The intro and outro music is from [Hitman's Lovesong feat. Paola Graziano] (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA 3.0] (https://creativecommons.org/licenses/by-sa/3.0/) ... Read more
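
As a stripped-down illustration of the retrieval-augmented generation pattern linked above (this is not how Pieces works): snippets are "embedded" with a toy bag-of-words counter, the best match is retrieved by cosine similarity, and a prompt is assembled for a model call that is omitted here. All snippets and names are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; real systems use learned embedding models."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

SNIPPETS = [  # hypothetical saved developer snippets
    "def retry(fn, attempts=3): wrap a flaky call and retry on failure",
    "docker compose file exposing postgres on port 5432 for local dev",
    "pytest fixture that spins up a temporary sqlite database",
]

def build_prompt(question, snippets, top_k=1):
    """Retrieve the most similar snippets and splice them into a prompt."""
    ranked = sorted(snippets, key=lambda s: cosine(embed(question), embed(s)), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Use the following saved context to answer.\n\nContext:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    print(build_prompt("how do I retry a flaky call?", SNIPPETS))
```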

28 Apr 2024

50 MINS

50:10


#423

Making Email Better With AI At Shortwave

Summary ------- Generative AI has rapidly transformed everything in the technology sector. When Andrew Lee started work on Shortwave he was focused on making email more productive. When AI started gaining adoption he realized that there was even more potential for a transformative experience. In this episode he shares the technical challenges that he and his team have overcome in integrating AI into their product, as well as the benefits and features that it provides to their customers. Announcements ------------- ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to [dataengineeringpodcast.com/dagster] (https://www.dataengineeringpodcast.com/dagster) today to get started. Your first 30 days are free! ---Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. ---Your host is Tobias Macey and today I'm interviewing Andrew Lee about his work on Shortwave, an AI-powered email client Interview --------- ---Introduction ---How did you get involved in the area of data management? ---Can you describe what Shortwave is and the story behind it? ------What is the core problem that you are addressing with Shortwave? ---Email has been a central part of communication and business productivity for decades now. What are the overall themes that continue to be problematic? ---What are the strengths that email maintains as a protocol and ecosystem? ---From a product perspective, what are the data challenges that are posed by email? ---Can you describe how you have architected the Shortwave platform? ------How have the design and goals of the product changed since you started it? ------What are the ways that the advent and evolution of language models have influenced your product roadmap? ---How do you manage the personalization of the AI functionality in your system for each user/team? ---For users and teams who are using Shortwave, how does it change their workflow and communication patterns? ---Can you describe how I would use Shortwave for managing the workflow of evaluating, planning, and promoting my podcast episodes? ---What are the most interesting, innovative, or unexpected ways that you have seen Shortwave used? 
---What are the most interesting, unexpected, or challenging lessons that you have learned while working on Shortwave? ---When is Shortwave the wrong choice? ---What do you have planned for the future of Shortwave? Contact Info ------------ --- [LinkedIn] (https://www.linkedin.com/in/startupandrew/) --- [Blog] (https://startupandrew.com/) Parting Question ---------------- ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements --------------------- ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story. Links ----- --- [Shortwave] (https://www.shortwave.com/) --- [Firebase] (https://firebase.google.com/) --- [Google Inbox] (https://en.wikipedia.org/wiki/Inbox_by_Gmail) --- [Hey] (https://www.hey.com/) ------ [Ezra Klein Hey Article] (https://www.nytimes.com/2024/04/07/opinion/gmail-email-digital-shame.html) --- [Superhuman] (https://superhuman.com/) --- [Pinecone] (https://www.pinecone.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/pinecone-vector-database-similarity-search-episode-189/) --- [Elastic] (https://www.elastic.co/) --- [Hybrid Search] (https://weaviate.io/blog/hybrid-search-explained) (a toy score-fusion sketch follows these notes) --- [Semantic Search] (https://en.wikipedia.org/wiki/Semantic_search) --- [Mistral] (https://mistral.ai/) --- [GPT 3.5] (https://platform.openai.com/docs/models/gpt-3-5-turbo) --- [IMAP] (https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) Sponsored By: --- [Starburst] (https://www.dataengineeringpodcast.com/starburst) : [![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - a data lake analytics platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, Starburst runs petabyte-scale SQL analytics fast at a fraction of the cost of traditional methods, helping you meet all your data needs ranging from AI/ML workloads to data applications to complete analytics. Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance allowing you to discover, transform, govern, and secure all in one place. Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? 
Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst)
- [Dagster] (https://dagster.io/lp/dagster-cloud-trial?source=data-eng-podcast): ![Dagster Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/jz4xfquZ.png) Data teams are tasked with helping organizations deliver on the promise of data, and with ML and AI maturing rapidly, expectations have never been this high. However, data engineers are challenged by both technical complexity and organizational complexity, with heterogeneous technologies to adopt, multiple data disciplines converging, legacy systems to support, and costs to manage. Dagster is an open-source orchestration solution that helps data teams rein in this complexity and build data platforms that provide unparalleled observability and testability, all while fostering collaboration across the enterprise. With enterprise-grade hosting on Dagster Cloud, you gain even more capabilities, adding cost management, security, and CI support to further boost your teams' productivity. Go to [dagster.io] (https://dagster.io/lp/dagster-cloud-trial?source=data-eng-podcast) today to get your first 30 days free!

[Support Data Engineering Podcast] (https://dataengineering.supercast.com/)
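One of the topics referenced in the show notes above is hybrid search, i.e. combining keyword retrieval with vector (semantic) retrieval. As a rough illustration of the general idea only, and not Shortwave's implementation, here is a minimal Python sketch of reciprocal rank fusion, a common way to merge the two result lists; the document ids and the k constant are made-up placeholders.

```python
# Illustrative sketch of hybrid search via reciprocal rank fusion (RRF).
# Not Shortwave's implementation: the ranked lists, message ids, and the
# k constant are hypothetical placeholders used only to show how keyword
# and vector result sets can be merged into a single ranking.

from collections import defaultdict

def reciprocal_rank_fusion(result_lists, k=60):
    """Merge several ranked lists of document ids into one ranking.

    Each document's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so documents ranked highly by multiple retrievers
    float to the top.
    """
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results: one list from a keyword index (e.g. BM25 over email
# bodies) and one from a vector index (e.g. embedding similarity).
keyword_hits = ["msg_42", "msg_17", "msg_03", "msg_99"]
vector_hits = ["msg_17", "msg_56", "msg_42", "msg_08"]

print(reciprocal_rank_fusion([keyword_hits, vector_hits]))
# msg_17 and msg_42 rank first because both retrievers agree on them.
```

Rank fusion is convenient here because it only needs each retriever's ranking, not comparable scores, so a keyword index and a vector index can be merged without any score normalization.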

21 Apr 2024

53 MINS

53:43



#422

Designing A Non-Relational Database Engine

Summary
-------

Databases come in a variety of formats for different use cases. The default association with the term "database" is relational engines, but non-relational engines are also used quite widely. In this episode Oren Eini, CEO and creator of RavenDB, explores the nuances of relational vs. non-relational engines and the strategies for designing a non-relational database.

Announcements
-------------

- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold).
- Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to [dataengineeringpodcast.com/dagster] (https://www.dataengineeringpodcast.com/dagster) today to get started. Your first 30 days are free!
- Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
- Your host is Tobias Macey and today I'm interviewing Oren Eini about the work of designing and building a NoSQL database engine

Interview
---------

- Introduction
- How did you get involved in the area of data management?
- Can you describe what constitutes a NoSQL database?
  - How have the requirements and applications of NoSQL engines changed since they first became popular ~15 years ago?
- What are the factors that convince teams to use a NoSQL vs. SQL database?
  - NoSQL is a generalized term that encompasses a number of different data models. How does the underlying representation (e.g. document, K/V, graph) change that calculus?
- How has the evolution in data formats (e.g. N-dimensional vectors, point clouds, etc.) changed the landscape for NoSQL engines?
- When designing and building a database, what are the initial set of questions that need to be answered?
  - How many "core capabilities" can you reasonably design around before they conflict with each other?
- How have you approached the evolution of RavenDB as you add new capabilities and mature the project?
  - What are some of the early decisions that had to be unwound to enable new capabilities?
- If you were to start from scratch today, what database would you build?
- What are the most interesting, innovative, or unexpected ways that you have seen RavenDB/NoSQL databases used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on RavenDB?
- When is a NoSQL database/RavenDB the wrong choice?
- What do you have planned for the future of RavenDB?

Contact Info
------------

- [Blog] (https://ayende.com/blog/)
- [LinkedIn] (https://www.linkedin.com/in/ravendb/?originalSubdomain=il)

Parting Question
----------------

- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
---------------------

- Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
- Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story.

Links
-----

- [RavenDB] (https://ravendb.net/)
- [RSS] (https://en.wikipedia.org/wiki/RSS)
- [Object Relational Mapper (ORM)] (https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping)
- [Relational Database] (https://en.wikipedia.org/wiki/Relational_database)
- [NoSQL] (https://en.wikipedia.org/wiki/NoSQL)
- [CouchDB] (https://couchdb.apache.org/)
- [Navigational Database] (https://en.wikipedia.org/wiki/Navigational_database)
- [MongoDB] (https://www.mongodb.com/)
- [Redis] (https://redis.io/)
- [Neo4J] (https://neo4j.com/)
- [Cassandra] (https://cassandra.apache.org/_/index.html)
- [Column-Family] (https://en.wikipedia.org/wiki/Column_family)
- [SQLite] (https://www.sqlite.org/)
- [LevelDB] (https://github.com/google/leveldb)
- [Firebird DB] (https://firebirdsql.org/)
- [fsync] (https://man7.org/linux/man-pages/man2/fsync.2.html) (see the durability sketch after these notes)
- [ESENT DB] (https://learn.microsoft.com/en-us/windows/win32/extensible-storage-engine/extensible-storage-engine-managed-reference)
- [KNN == K-Nearest Neighbors] (https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm)
- [RocksDB] (https://rocksdb.org/)
- [C# Language] (https://en.wikipedia.org/wiki/C_Sharp_(programming_language))
- [ASP.NET] (https://en.wikipedia.org/wiki/ASP.NET)
- [QUIC] (https://en.wikipedia.org/wiki/QUIC)
- [Dynamo Paper] (https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf)
- [Database Internals] (https://amzn.to/49A5wjF) book (affiliate link)
- [Designing Data Intensive Applications] (https://amzn.to/3JgCZFh) book (affiliate link)

The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/)

Sponsored By:

- [Starburst] (https://www.dataengineeringpodcast.com/starburst): ![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - a data lake analytics platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, Starburst runs petabyte-scale SQL analytics fast at a fraction of the cost of traditional methods, helping you meet all your data needs ranging from AI/ML workloads to data applications to complete analytics. Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance allowing you to discover, transform, govern, and secure all in one place. Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst)
- [Datafold] (https://get.datafold.com/replication-de-podcast): ![Datafold](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/zm6x2tFu.png) This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting https://get.datafold.com/replication-de-podcast.
- [Dagster] (https://dagster.io/lp/dagster-cloud-trial?source=data-eng-podcast): ![Dagster Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/jz4xfquZ.png) Data teams are tasked with helping organizations deliver on the promise of data, and with ML and AI maturing rapidly, expectations have never been this high. However, data engineers are challenged by both technical complexity and organizational complexity, with heterogeneous technologies to adopt, multiple data disciplines converging, legacy systems to support, and costs to manage. Dagster is an open-source orchestration solution that helps data teams rein in this complexity and build data platforms that provide unparalleled observability and testability, all while fostering collaboration across the enterprise. With enterprise-grade hosting on Dagster Cloud, you gain even more capabilities, adding cost management, security, and CI support to further boost your teams' productivity. Go to [dagster.io] (https://dagster.io/lp/dagster-cloud-trial?source=data-eng-podcast) today to get your first 30 days free!

[Support Data Engineering Podcast] (https://dataengineering.supercast.com/)
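The links above (fsync, LevelDB, RocksDB) point at the storage-engine internals that come up when discussing how to design a database engine. As a teaching sketch only, and not how RavenDB or any production engine is implemented, here is a toy append-only key/value store in Python that shows why an engine flushes and fsyncs before acknowledging a write; the file name and on-disk record format are invented for the example.

```python
# Toy append-only key/value log, included only to illustrate why fsync
# matters when designing a storage engine. This is a teaching sketch, not
# how RavenDB or any production engine works; the file path and the
# JSON-lines record format are made up for the example.

import json
import os

class AppendOnlyKV:
    def __init__(self, path):
        self.path = path
        self.index = {}  # key -> latest value, rebuilt on startup
        if os.path.exists(path):
            self._replay()
        self._file = open(path, "a", encoding="utf-8")

    def _replay(self):
        # Rebuild the in-memory index by replaying the log from the start.
        with open(self.path, encoding="utf-8") as f:
            for line in f:
                record = json.loads(line)
                self.index[record["k"]] = record["v"]

    def put(self, key, value):
        record = json.dumps({"k": key, "v": value})
        self._file.write(record + "\n")
        self._file.flush()             # push Python's buffer to the OS
        os.fsync(self._file.fileno())  # force the OS to write to disk
        self.index[key] = value        # only now is the write "durable"

    def get(self, key):
        return self.index.get(key)

store = AppendOnlyKV("toy.log")
store.put("user:1", {"name": "Ada"})
print(store.get("user:1"))
```

A real engine adds compaction, indexing, transactions, and crash-recovery checks on top of this, but the flush-then-fsync ordering before acknowledging a write is the durability guarantee the fsync man page describes.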

14 Apr 2024

1 HR 16 MINS

1:16:02



#421

Establish A Single Source Of Truth For Your Data Consumers With A Semantic Layer

Summary
-------

Maintaining a single source of truth for your data is one of the biggest challenges in data engineering. Different roles and tasks in the business need their own ways to access and analyze the data in the organization. To support those varied access patterns while preserving a single point of access, the semantic layer has evolved as a technological solution to the problem. In this episode Artyom Keydunov, creator of Cube, discusses the evolution and applications of the semantic layer as a component of your data platform, and how Cube provides speed and cost optimization for your data consumers.

Announcements
-------------

- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold).
- Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to [dataengineeringpodcast.com/dagster] (https://www.dataengineeringpodcast.com/dagster) today to get started. Your first 30 days are free!
- Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to [dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
- Your host is Tobias Macey and today I'm interviewing Artyom Keydunov about the role of the semantic layer in your data platform

Interview
---------

- Introduction
- How did you get involved in the area of data management?
- Can you start by outlining the technical elements of what it means to have a "semantic layer"?
- In the past couple of years there was a rapid hype cycle around the "metrics layer" and "headless BI", which has largely faded.
  Can you give your assessment of the current state of the industry around the adoption/implementation of these concepts?
- What are the benefits of having a discrete service that offers the business metrics/semantic mappings as opposed to implementing those concepts as part of a more general system? (e.g. dbt, BI, warehouse marts, etc.)
  - At what point does it become necessary/beneficial for a team to adopt such a service?
  - What are the challenges involved in retrofitting a semantic layer into a production data system?
- evolution of requirements/usage patterns
- technical complexities/performance and cost optimization
- What are the most interesting, innovative, or unexpected ways that you have seen Cube used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Cube?
- When is Cube/a semantic layer the wrong choice?
- What do you have planned for the future of Cube?

Contact Info
------------

- [LinkedIn] (https://www.linkedin.com/in/keydunov/)
- [keydunov] (https://github.com/keydunov) on GitHub

Parting Question
----------------

- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
---------------------

- Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [Machine Learning Podcast] (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
- Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [hosts@dataengineeringpodcast.com] (mailto:hosts@dataengineeringpodcast.com) with your story.
Links
-----

- [Cube] (https://cube.dev/)
- [Semantic Layer] (https://en.wikipedia.org/wiki/Semantic_layer)
- [Business Objects] (https://en.wikipedia.org/wiki/BusinessObjects)
- [Tableau] (https://www.tableau.com/)
- [Looker] (https://cloud.google.com/looker/?hl=en)
  - [Podcast Episode] (https://www.dataengineeringpodcast.com/looker-with-daniel-mintz-episode-55/)
- [Mode] (https://mode.com/)
- [Thoughtspot] (https://www.thoughtspot.com/)
- [LightDash] (https://www.lightdash.com/)
  - [Podcast Episode] (https://www.dataengineeringpodcast.com/lightdash-exploratory-business-intelligence-episode-232/)
- [Embedded Analytics] (https://en.wikipedia.org/wiki/Embedded_analytics)
- [Dimensional Modeling] (https://en.wikipedia.org/wiki/Dimensional_modeling)
- [Clickhouse] (https://clickhouse.com/)
  - [Podcast Episode] (https://www.dataengineeringpodcast.com/clickhouse-data-warehouse-episode-88/)
- [Druid] (https://druid.apache.org/)
- [BigQuery] (https://cloud.google.com/bigquery?hl=en)
- [Starburst] (https://www.starburst.io/)
- [Pinot] (https://pinot.apache.org/)
- [Snowflake] (https://www.snowflake.com/en/)
  - [Podcast Episode] (https://www.dataengineeringpodcast.com/snowflakedb-cloud-data-warehouse-episode-110/)
- [Arrow Datafusion] (https://arrow.apache.org/datafusion/)
- [Metabase] (https://www.metabase.com/)
  - [Podcast Episode] (https://www.dataengineeringpodcast.com/metabase-with-sameer-al-sakran-episode-29)
- [Superset] (https://superset.apache.org/)
- [Alation] (https://www.alation.com/)
- [Collibra] (https://www.collibra.com/)
  - [Podcast Episode] (https://www.dataengineeringpodcast.com/collibra-enterprise-data-governance-episode-188)
- [Atlan] (https://atlan.com/)
  - [Podcast Episode] (https://www.dataengineeringpodcast.com/atlan-data-team-collaboration-episode-179)

The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/)

Sponsored By:

- [Starburst] (https://www.dataengineeringpodcast.com/starburst): ![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - a data lake analytics platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, Starburst runs petabyte-scale SQL analytics fast at a fraction of the cost of traditional methods, helping you meet all your data needs ranging from AI/ML workloads to data applications to complete analytics. Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance allowing you to discover, transform, govern, and secure all in one place. Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free.
[dataengineeringpodcast.com/starburst] (https://www.dataengineeringpodcast.com/starburst)
- [Datafold] (https://get.datafold.com/replication-de-podcast): ![Datafold](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/zm6x2tFu.png) This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting https://get.datafold.com/replication-de-podcast.
- [Dagster] (https://dagster.io/lp/dagster-cloud-trial?source=data-eng-podcast): ![Dagster Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/jz4xfquZ.png) Data teams are tasked with helping organizations deliver on the promise of data, and with ML and AI maturing rapidly, expectations have never been this high. However, data engineers are challenged by both technical complexity and organizational complexity, with heterogeneous technologies to adopt, multiple data disciplines converging, legacy systems to support, and costs to manage. Dagster is an open-source orchestration solution that helps data teams rein in this complexity and build data platforms that provide unparalleled observability and testability, all while fostering collaboration across the enterprise. With enterprise-grade hosting on Dagster Cloud, you gain even more capabilities, adding cost management, security, and CI support to further boost your teams' productivity. Go to [dagster.io] (https://dagster.io/lp/dagster-cloud-trial?source=data-eng-podcast) today to get your first 30 days free!

[Support Data Engineering Podcast] (https://dataengineering.supercast.com/)
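To make the semantic-layer idea in this episode a little more concrete, here is a deliberately small Python sketch of the core pattern: metrics are defined once, centrally, and compiled to SQL on request so that every consumer shares the same definition. The table, column, and metric names are hypothetical, and this is not Cube's actual modeling syntax (Cube models are written in YAML or JavaScript).

```python
# Minimal sketch of the idea behind a semantic layer: business metrics are
# defined once and every consumer asks for them by name instead of
# re-writing the SQL. The table/column names and this tiny query builder
# are hypothetical; Cube's real modeling language is much richer.

from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    sql: str    # aggregate expression
    table: str

METRICS = {
    "total_revenue": Metric("total_revenue", "SUM(amount)", "orders"),
    "order_count": Metric("order_count", "COUNT(*)", "orders"),
}

def compile_query(metric_name: str, group_by: list[str]) -> str:
    """Turn a metric request into SQL so every tool shares one definition."""
    m = METRICS[metric_name]
    dims = ", ".join(group_by)
    select = f"{dims}, " if dims else ""
    group = f" GROUP BY {dims}" if dims else ""
    return f"SELECT {select}{m.sql} AS {m.name} FROM {m.table}{group}"

print(compile_query("total_revenue", ["order_date"]))
# SELECT order_date, SUM(amount) AS total_revenue FROM orders GROUP BY order_date
```

Centralizing the definition is what lets BI dashboards, notebooks, and embedded analytics all report the same total_revenue number instead of each re-implementing the SQL.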

07 Apr 2024

56 MINS

56:23
