Data Engineering Podcast

This show goes behind the scenes of the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.


#454

CSVs Will Never Die And OneSchema Is Counting On It

SummaryIn this episode of the Data Engineering Podcast Andrew Luo, CEO of OneSchema, talks about handling CSV data in business operations. Andrew shares his background in data engineering and CRM migration, which led to the creation of OneSchema, a platform designed to automate CSV imports and improve data validation processes. He discusses the challenges of working with CSVs, including inconsistent type representation, lack of schema information, and technical complexities, and explains how OneSchema addresses these issues using multiple CSV parsers and AI for data type inference and validation. Andrew highlights the business case for OneSchema, emphasizing efficiency gains for companies dealing with large volumes of CSV data, and shares plans to expand support for other data formats and integrate AI-driven transformation packs for specific industries.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today for the details. ---Your host is Tobias Macey and today I'm interviewing Andrew Luo about how OneSchema addresses the headaches of dealing with CSV data for your business Interview ---Introduction ---How did you get involved in the area of data management? ---Despite the years of evolution and improvement in data storage and interchange formats, CSVs are just as prevalent as ever. What are your opinions/theories on why they are so ubiquitous? ---What are some of the major sources of CSV data for teams that rely on them for business and analytical processes? ---The most obvious challenge with CSVs is their lack of type information, but they are notorious for having numerous other problems. What are some of the other major challenges involved with using CSVs for data interchange/ingestion? ---Can you describe what you are building at OneSchema and the story behind it? ------What are the core problems that you are solving, and for whom? ---Can you describe how you have architected your platform to be able to manage the variety, volume, and multi-tenancy of data that you process? ------How have the design and goals of the product changed since you first started working on it? ---What are some of the major performance issues that you have encountered while dealing with CSV data at scale? ---What are some of the most surprising things that you have learned about CSVs in the process of building OneSchema? ---What are the most interesting, innovative, or unexpected ways that you have seen OneSchema used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on OneSchema? ---When is OneSchema the wrong choice? ---What do you have planned for the future of OneSchema? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/andrewjluo/) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! 
Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [OneSchema] (https://oneschema.co/tobias) --- [EDI == Electronic Data Interchange] (https://en.wikipedia.org/wiki/Electronic_data_interchange) --- [UTF-8 BOM (Byte Order Mark) Characters] (https://en.wikipedia.org/wiki/Byte_order_mark) --- [SOAP] (https://en.wikipedia.org/wiki/SOAP) --- [CSV RFC] (https://www.ietf.org/rfc/rfc4180.txt) --- [Iceberg] (https://iceberg.apache.org/) --- [SSIS == SQL Server Integration Services] (https://learn.microsoft.com/en-us/sql/integration-services/sql-server-integration-services?view=sql-server-ver16) --- [MS Access] (https://www.microsoft.com/en-us/microsoft-365/access) --- [Datafusion] (https://datafusion.apache.org/) --- [JSON Schema] (https://json-schema.org/) --- [SFTP == Secure File Transfer Protocol] (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more
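
For readers skimming these notes, a minimal Python sketch of the CSV pain points Andrew describes (a byte order mark glued onto the header, no schema metadata, and values that defeat naive type inference) may be helpful. The sample bytes and the `infer_type` helper are hypothetical, and this is in no way OneSchema's implementation.

```python
import csv
import io

# Hypothetical payload: note the leading UTF-8 BOM and the single "N/A" cell.
raw = b"\xef\xbb\xbfid,amount,signup_date\n1,10.5,2024-01-03\n2,N/A,2024-02-14\n"

# 'utf-8-sig' strips a leading BOM that would otherwise end up glued onto
# the first header name as "\ufeffid".
text = raw.decode("utf-8-sig")

# Guess the delimiter and quoting rules, since CSVs carry no schema information.
dialect = csv.Sniffer().sniff(text)
rows = list(csv.DictReader(io.StringIO(text), dialect=dialect))

def infer_type(values):
    """Very naive column-type inference; real importers need far more rules."""
    for cast, name in ((int, "int"), (float, "float")):
        try:
            for value in values:
                cast(value)
            return name
        except ValueError:
            pass
    return "string"

for column in rows[0]:
    print(column, infer_type([row[column] for row in rows]))
# 'amount' degrades to "string" because of the single "N/A" cell --
# the kind of inconsistent type representation discussed in the episode.
```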

13 Jan 2025

54:40


#453

Breaking Down Data Silos: AI and ML in Master Data Management

SummaryIn this episode of the Data Engineering Podcast Dan Bruckner, co-founder and CTO of Tamr, talks about the application of machine learning (ML) and artificial intelligence (AI) in master data management (MDM). Dan shares his journey from working at CERN to becoming a data expert and discusses the challenges of reconciling large-scale organizational data. He explains how data silos arise from independent teams and highlights the importance of combining traditional techniques with modern AI to address the nuances of data reconciliation. Dan emphasizes the transformative potential of large language models (LLMs) in creating more natural user experiences, improving trust in AI-driven data solutions, and simplifying complex data management processes. He also discusses the balance between using AI for complex data problems and the necessity of human oversight to ensure accuracy and trust.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today for the details. ---As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us don't miss Data Citizens® Dialogues, the forward-thinking podcast brought to you by Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. In every episode of Data Citizens® Dialogues, industry leaders unpack data’s impact on the world; like in their episode “The Secret Sauce Behind McDonald’s Data Strategy”, which digs into how AI-driven tools can be used to support crew efficiency and customer interactions. In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. The Data Citizens Dialogues podcast is bringing the data conversation to you, so start listening now! Follow Data Citizens Dialogues on Apple, Spotify, YouTube, or wherever you get your podcasts. ---Your host is Tobias Macey and today I'm interviewing Dan Bruckner about the application of ML and AI techniques to the challenge of reconciling data at the scale of business Interview ---Introduction ---How did you get involved in the area of data management? ---Can you start by giving an overview of the different ways that organizational data becomes unwieldy and needs to be consolidated and reconciled? ------How does that reconciliation relate to the practice of "master data management" ---What are the scaling challenges with the current set of practices for reconciling data? ---ML has been applied to data cleaning for a long time in the form of entity resolution, etc. How has the landscape evolved or matured in recent years? ------What (if any) transformative capabilities do LLMs introduce? 
---What are the missing pieces/improvements that are necessary to make current AI systems usable out-of-the-box for data cleaning? ---What are the strategic decisions that need to be addressed when implementing ML/AI techniques in the data cleaning/reconciliation process? ---What are the risks involved in bringing ML to bear on data cleaning for inexperienced teams? ---What are the most interesting, innovative, or unexpected ways that you have seen ML techniques used in data resolution? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on using ML/AI in master data management? ---When is ML/AI the wrong choice for data cleaning/reconciliation? ---What are your hopes/predictions for the future of ML/AI applications in MDM and data cleaning? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/daniel-bruckner-35582a2a/) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [Tamr] (https://www.tamr.com/) --- [Master Data Management] (https://en.wikipedia.org/wiki/Master_data_management) --- [CERN] (https://home.cern/) --- [LHC] (https://home.cern/science/accelerators/large-hadron-collider) --- [Michael Stonebraker] (https://en.wikipedia.org/wiki/Michael_Stonebraker) --- [Conway's Law] (https://en.wikipedia.org/wiki/Conway%27s_law) --- [Expert Systems] (https://en.wikipedia.org/wiki/Expert_system) --- [Information Retrieval] (https://en.wikipedia.org/wiki/Information_retrieval) --- [Active Learning] (https://en.wikipedia.org/wiki/Active_learning_(machine_learning)) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more
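
As a loose illustration of the entity-resolution problem underlying this conversation, the sketch below scores two hypothetical customer records with a crude string-similarity measure. The field weights and review threshold are invented, and production MDM systems such as Tamr rely on trained models, blocking strategies, and human review rather than anything this simple.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude string similarity in [0, 1]; real systems use trained matchers."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

record_a = {"name": "Acme Corp.", "city": "New York"}
record_b = {"name": "ACME Corporation", "city": "New York City"}

# Hypothetical field weights and threshold, purely for illustration.
score = 0.7 * similarity(record_a["name"], record_b["name"]) + \
        0.3 * similarity(record_a["city"], record_b["city"])

print(f"match score: {score:.2f}")
if score > 0.6:
    print("candidate match -> route to review queue")
```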

03 Jan 2025

57:30


#452

Building a Data Vision Board: A Guide to Strategic Planning

SummaryIn this episode of the Data Engineering Podcast Lior Barak shares his insights on developing a three-year strategic vision for data management. He discusses the importance of having a strategic plan for data, highlighting the need for data teams to focus on impact rather than just enablement. He introduces the concept of a "data vision board" and explains how it can help organizations outline their strategic vision by considering three key forces: regulation, stakeholders, and organizational goals. Lior emphasizes the importance of balancing short-term pressures with long-term strategic goals, quantifying the cost of data issues to prioritize effectively, and maintaining the strategic vision as a living document through regular reviews. He encourages data teams to shift from being enablers to impact creators and provides practical advice on implementing a data vision board, setting clear KPIs, and embracing a product mindset to create tangible business impacts through strategic data management.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---It’s 2024, why are we still doing data migrations by hand? Teams spend months—sometimes years—manually converting queries and validating data, burning resources and crushing morale. Datafold's AI-powered Migration Agent brings migrations into the modern era. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today to learn how Datafold can automate your migration and ensure source to target parity. ---Your host is Tobias Macey and today I'm interviewing Lior Barak about how to develop your three year strategic vision for data Interview ---Introduction ---How did you get involved in the area of data management? ---Can you start by giving an outline of the types of problems that occur as a result of not developing a strategic plan for an organization's data systems? ---What is the format that you recommend for capturing that strategic vision? ------What are the types of decisions and details that you believe should be included in a vision statement? ---Why is a 3 year horizon beneficial? What does that scale of time encourage/discourage in the debate and decision-making process? ---Who are the personas that should be included in the process of developing this strategy document? ---Can you walk us through the steps and processes involved in developing the data vision board for an organization? ---What are the time-frames or milestones that should lead to revisiting and revising the strategic objectives? ---What are the most interesting, innovative, or unexpected ways that you have seen a data vision strategy used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on data strategy development? ---When is a data vision board the wrong choice? ---What are some additional resources or practices that you recommend teams invest in as a supplement to this strategic vision exercise? 
Contact Info --- [LinkedIn] (https://www.linkedin.com/in/liorbarak/) --- [Substack] (https://cookingdata.substack.com/) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email <a target="_blank">hosts@dataengineeringpodcast.com</a> with your story. Links --- [Vision Board Overview] (https://cookingdata.substack.com/p/wabi-sabi-your-data-crafting-an-imperfect) --- [Episode 397: Defining A Strategy For Your Data Products] (https://www.dataengineeringpodcast.com/data-product-strategy-episode-397) --- [Minto Pyramid Principle] (https://www.mckinsey.com/alumni/news-and-events/global-news/alumni-news/barbara-minto-mece-i-invented-it-so-i-get-to-say-how-to-pronounce-it) --- [KPI == Key Performance Indicator] (https://www.kpi.org/kpi-basics/) --- [OKR == Objectives and Key Results] (https://en.wikipedia.org/wiki/Objectives_and_key_results) --- [Phil Jackson: Eleven Rings] (https://amzn.to/3P93gYV) (affiliate link) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more

23 Dec 2024

49:59


#451

How Orchestration Impacts Data Platform Architecture

SummaryThe core task of data engineering is managing the flows of data through an organization. In order to ensure those flows are executing on schedule and without error is the role of the data orchestrator. Which orchestration engine you choose impacts the ways that you architect the rest of your data platform. In this episode Hugo Lu shares his thoughts as the founder of an orchestration company on how to think about data orchestration and data platform design as we navigate the current era of data engineering.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---It’s 2024, why are we still doing data migrations by hand? Teams spend months—sometimes years—manually converting queries and validating data, burning resources and crushing morale. Datafold's AI-powered Migration Agent brings migrations into the modern era. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today to learn how Datafold can automate your migration and ensure source to target parity. ---As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us don't miss Data Citizens® Dialogues, the forward-thinking podcast brought to you by Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. In every episode of Data Citizens® Dialogues, industry leaders unpack data’s impact on the world, from big picture questions like AI governance and data sharing to more nuanced questions like, how do we balance offense and defense in data management? In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. The Data Citizens Dialogues podcast is bringing the data conversation to you, so start listening now! Follow Data Citizens Dialogues on Apple, Spotify, YouTube, or wherever you get your podcasts. ---Your host is Tobias Macey and today I'm interviewing Hugo Lu about the data platform and orchestration ecosystem and how to navigate the available options Interview ---Introduction ---How did you get involved in building data platforms? ---Can you describe what an orchestrator is in the context of data platforms? ------There are many other contexts in which orchestration is necessary. What are some examples of how orchestrators have adapted (or failed to adapt) to the times? ---What are the core features that are necessary for an orchestrator to have when dealing with data-oriented workflows? ---Beyond the bare necessities, what are some of the other features and design considerations that go into building a first-class dat platform or orchestration system? ---There have been several generations of orchestration engines over the past several years. How would you characterize the different coarse groupings of orchestration engines across those generational boundaries? 
---How do the characteristics of a data orchestrator influence the overarching architecture of an organization's data platform/data operations? ------What about the reverse? ---How have the cycles of ML and AI workflow requirements impacted the design requirements for data orchestrators? ---What are the most interesting, innovative, or unexpected ways that you have seen data orchestrators used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on data orchestration? ---When is an orchestrator the wrong choice? ---What are your predictions and/or hopes for the future of data orchestration? Contact Info --- [Medium] (https://medium.com/%40hugolu87) --- [LinkedIn] (https://www.linkedin.com/in/hugo-lu-confirmed/) Parting Question ---From your perspective, what is the biggest thing data teams are missing in the technology today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [Orchestra] (https://www.getorchestra.io/) --- [Previous Episode: Overview Of The State Of Data Orchestration] (https://markdowntohtml.com/https://www.dataengineeringpodcast.com/state-of-data-orchestration-episode-391) --- [Cron] (https://en.wikipedia.org/wiki/Cron) --- [ArgoCD] (https://argo-cd.readthedocs.io/en/stable/) --- [DAG] (https://en.wikipedia.org/wiki/Directed_acyclic_graph) --- [Kubernetes] (https://kubernetes.io/) --- [Data Mesh] (https://www.datamesh-architecture.com/) --- [Airflow] (https://airflow.apache.org/) --- [SSIS == SQL Server Integration Services] (https://learn.microsoft.com/en-us/sql/integration-services/sql-server-integration-services?view=sql-server-ver16) --- [Pentaho] (https://pentaho.com/) --- [Kettle] (https://pentaho.com/products/pentaho-data-integration/) --- [DataVolo] (https://datavolo.io/) --- [NiFi] (https://nifi.apache.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/nifi-with-kevin-doran-and-andy-lopresto-episode-39) --- [Dagster] (https://dagster.io/) --- [gRPC] (https://grpc.io/) --- [Coalesce] (https://coalesce.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/coalesce-enterprise-analytics-transformations-episode-278) --- [dbt] (https://www.getdbt.com/) --- [DataHub] (https://datahubproject.io/) --- [Palantir] (https://www.palantir.com/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more
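
For context on the core abstraction discussed here, the following Python sketch runs a handful of hypothetical tasks in dependency order using the standard library's topological sorter. It is not the design of Orchestra or any other tool mentioned in the episode, only the bare DAG-ordering idea that real orchestrators extend with scheduling, retries, state, observability, and parallelism.

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

def extract():
    print("pull raw files from sources")

def validate():
    print("check schemas and row counts")

def transform():
    print("build warehouse models")

def publish():
    print("refresh downstream dashboards")

TASKS = {
    "extract": extract,
    "validate": validate,
    "transform": transform,
    "publish": publish,
}

# Each task maps to the upstream tasks that must complete before it runs.
DAG = {
    "validate": {"extract"},
    "transform": {"validate"},
    "publish": {"transform"},
}

# static_order() yields every task after all of its dependencies.
for name in TopologicalSorter(DAG).static_order():
    TASKS[name]()
```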

16 Dec 2024

59:39


#450

An Exploration Of The Impediments To Reusable Data Pipelines

SummaryIn this episode of the Data Engineering Podcast the inimitable Max Beauchemin talks about reusability in data pipelines. The conversation explores the "write everything twice" problem, where similar pipelines are built without code reuse, and discusses the challenges of managing different SQL dialects and relational databases. Max also touches on the evolving role of data engineers, drawing parallels with front-end engineering, and suggests that generative AI could facilitate knowledge capture and distribution in data engineering. He encourages the community to share reference implementations and templates to foster collaboration and innovation, and expresses hopes for a future where code reuse becomes more prevalent.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today for the details. ---Your host is Tobias Macey and today I'm joined again by Max Beauchemin to talk about the challenges of reusability in data pipelines Interview ---Introduction ---How did you get involved in the area of data management? ---Can you start by sharing your current thesis on the opportunities and shortcomings of code and component reusability in the data context? ------What are some ways that you think about what constitutes a "component" in this context? ---The data ecosystem has arguably grown more varied and nuanced in recent years. At the same time, the number and maturity of tools has grown. What is your view on the current trend in productivity for data teams and practitioners? ---What do you see as the core impediments to building more reusable and general-purpose solutions in data engineering? ------How can we balance the actual needs of data consumers against their requests (whether well- or un-informed) to help increase our ability to better design our workflows for reuse? ---In data engineering there are two broad approaches; code-focused or SQL-focused pipelines. In principle one would think that code-focused environments would have better composability. What are you seeing as the realities in your personal experience and what you hear from other teams? ---When it comes to SQL dialects, dbt offers the option of Jinja macros, whereas SDF and SQLMesh offer automatic translation. There are also tools like PRQL and Malloy that aim to abstract away the underlying SQL. What are the tradeoffs across those options that help or hinder the portability of transformation logic? ---Which layers of the data stack/steps in the data journey do you see the greatest opportunity for improving the creation of more broadly usable abstractions/reusable elements? ---low/no code systems for code reuse ---impact of LLMs on reusability/composition ---impact of background on industry practices (e.g. DBAs, sysadmins, analysts vs. SWE, etc.) ---polymorphic data models (e.g. 
activity schema) ---What are the most interesting, innovative, or unexpected ways that you have seen teams address composability and reusability of data components? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on data-oriented tools and utilities? ---What are your hopes and predictions for sharing of code and logic in the future of data engineering? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/maximebeauchemin/) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [Max's Blog Post] (https://preset.io/blog/why-data-teams-keep-reinventing-the-wheel/) --- [Airflow] (https://airflow.apache.org/) --- [Superset] (https://superset.apache.org/) --- [Tableau] (https://www.tableau.com/) --- [Looker] (https://cloud.google.com/looker/?hl=en) --- [PowerBI] (https://www.microsoft.com/en-us/power-platform/products/power-bi) --- [Cohort Analysis] (https://en.wikipedia.org/wiki/Cohort_analysis) --- [NextJS] (https://nextjs.org/) --- [Airbyte] (https://airbyte.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/airbyte-stable-release-episode-439) --- [Fivetran] (https://www.fivetran.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/fivetran-data-replication-episode-93/) --- [Segment] (https://segment.com/) --- [dbt] (https://www.getdbt.com/) --- [SQLMesh] (https://sqlmesh.readthedocs.io/en/stable/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/sqlmesh-open-source-dataops-episode-380) --- [Spark] (https://spark.apache.org/) --- [LAMP Stack] (https://aws.amazon.com/what-is/lamp-stack/) --- [PHP] (https://www.php.net/) --- [Relational Algebra] (https://en.wikipedia.org/wiki/Relational_algebra) --- [Knowledge Graph] (https://en.wikipedia.org/wiki/Knowledge_graph) --- [Python Marshmallow] (https://marshmallow.readthedocs.io/en/stable/) --- [Data Warehouse Lifecycle Toolkit] (https://amzn.to/4f99suH) (affiliate link) --- [Entity Centric Data Modeling] (https://preset.io/blog/introducing-entity-centric-data-modeling-for-analytics/) Blog Post --- [Amplitude] (https://amplitude.com/) --- [OSACon] (https://osacon.io/sessions/2024/ai-reality-checkpoint-the-good-the-bad-and-the-overhyped/) presentation --- [ol-data-platform] (https://github.com/mitodl/ol-data-platform) Tobias' team's data platform code The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more

08 Dec 2024

51:32


#449

The Art of Database Selection and Evolution

SummaryIn this episode of the Data Engineering Podcast Sam Kleinman talks about the pivotal role of databases in software engineering. Sam shares his journey into the world of data and discusses the complexities of database selection, highlighting the trade-offs between different database architectures and how these choices affect system design, query performance, and the need for ETL processes. He emphasizes the importance of understanding specific requirements to choose the right database engine and warns against over-engineering solutions that can lead to increased complexity. Sam also touches on the tendency of engineers to move logic to the application layer due to skepticism about database longevity and advises teams to leverage database capabilities instead. Finally, he identifies a significant gap in data management tooling: the lack of easy-to-use testing tools for database interactions, highlighting the need for better testing paradigms to ensure reliability and reduce bugs in data-driven applications.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---It’s 2024, why are we still doing data migrations by hand? Teams spend months—sometimes years—manually converting queries and validating data, burning resources and crushing morale. Datafold's AI-powered Migration Agent brings migrations into the modern era. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today to learn how Datafold can automate your migration and ensure source to target parity. ---Your host is Tobias Macey and today I'm interviewing Sam Kleinman about database tradeoffs across operating environments and axes of scale Interview ---Introduction ---How did you get involved in the area of data management? ---The database engine you use has a substantial impact on how you architect your overall system. When starting a greenfield project, what do you see as the most important factor to consider when selecting a database? ---points of friction introduced by database capabilities ---embedded databases (e.g. SQLite, DuckDB, LanceDB), when to use and when do they become a bottleneck ---single-node database engines (e.g. Postgres, MySQL), when are they legitimately a problem ---distributed databases (e.g. CockroachDB, PlanetScale, MongoDB) ---polyglot storage vs. general-purpose/multimodal databases ---federated queries, benefits and limitations ------ease of integration vs. variability of performance and access control Contact Info --- [LinkedIn] (https://www.linkedin.com/in/samkleinman/) --- [GitHub] (https://github.com/tychoish) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. 
---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [MongoDB] (https://www.mongodb.com/) --- [Neon] (https://neon.tech/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/neon-serverless-postgres-episode-433) --- [GlareDB] (https://glaredb.com/) --- [NoSQL] (https://en.wikipedia.org/wiki/NoSQL) --- [S3 Conditional Write] (https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/) --- [Event driven architecture] (https://en.wikipedia.org/wiki/Event-driven_architecture) --- [CockroachDB] (https://www.cockroachlabs.com/) --- [Couchbase] (https://www.couchbase.com/) --- [Cassandra] (https://cassandra.apache.org/_/index.html) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more

01 Dec 2024

59:56


#448

Bridging Code and UI in Data Orchestration with Kestra

SummaryIn this episode of the Data Engineering Podcast, Anna Geller talks about the integration of code and UI-driven interfaces for data orchestration. Anna defines data orchestration as automating the coordination of workflow nodes that interact with data across various business functions, discussing how it goes beyond ETL and analytics to enable real-time data processing across different internal systems. She explores the challenges of using existing scheduling tools for data-specific workflows, highlighting limitations and anti-patterns, and discusses Kestra's solution, a low-code orchestration platform that combines code-driven flexibility with UI-driven simplicity. Anna delves into Kestra's architectural design, API-first approach, and pluggable infrastructure, and shares insights on balancing UI and code-driven workflows, the challenges of open-core business models, and innovative user applications of Kestra's platform.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today for the details. ---As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us you should listen to Data Citizens® Dialogues, the forward-thinking podcast from the folks at Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. They address questions around AI governance, data sharing, and working at global scale. In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. While data is shaping our world, Data Citizens Dialogues is shaping the conversation. Subscribe to Data Citizens Dialogues on Apple, Spotify, Youtube, or wherever you get your podcasts. ---Your host is Tobias Macey and today I'm interviewing Anna Geller about incorporating both code and UI driven interfaces for data orchestration Interview ---Introduction ---How did you get involved in the area of data management? ---Can you start by sharing a definition of what constitutes "data orchestration"? ---There are many orchestration and scheduling systems that exist in other contexts (e.g. CI/CD systems, Kubernetes, etc.). Those are often adapted to data workflows because they already exist in the organizational context. What are the anti-patterns and limitations that approach introduces in data workflows? ------What are the problems that exist in the opposite direction of using data orchestrators for CI/CD, etc.? ---Data orchestrators have been around for decades, with many different generations and opinions about how and by whom they are used. What do you see as the main motivation for UI vs. code-driven workflows? 
---What are the benefits of combining code-driven and UI-driven capabilities in a single orchestrator? ------What constraints does it necessitate to allow for interoperability between those modalities? ---Data Orchestrators need to integrate with many external systems. How does Kestra approach building integrations and ensure governance for all their underlying configurations? ---Managing workflows at scale across teams can be challenging in terms of providing structure and visibility of dependencies across workflows and teams. What features does Kestra offer so that all pipelines and teams stay organised? ---What are the most interesting, innovative, or unexpected ways that you have seen Kestra used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on Kestra? ---When is Kestra the wrong choice? ---What do you have planned for the future of Kestra? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/anna-geller-12a86811a/) --- [Blog] (https://annageller.medium.com/) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [Kestra] (https://kestra.io/) --- [CI/CD] (https://en.wikipedia.org/wiki/CI/CD) --- [State Machine] (https://en.wikipedia.org/wiki/Finite-state_machine) --- [AWS Lambda] (https://aws.amazon.com/lambda/) --- [GitHub Actions] (https://github.com/features/actions) --- [ECS Fargate] (https://aws.amazon.com/fargate/) --- [Airflow] (https://airflow.apache.org/) --- [Kafka] (https://kafka.apache.org/) --- [Elasticsearch] (https://www.elastic.co/) --- [Airflow XCom] (https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/xcoms.html) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) In this episode of the Data Engineering Podcast, host Tobias Macy interviews Anna Geller, a data engineer turned product manager, about the integration of code and UI-driven interfaces for data orchestration. Anna shares her journey from working with data during an internship at KPMG to her current role as a product lead at Kestra. She provides her insights into the concept of data orchestration, emphasizing its broader scope beyond just ETL and analytics, and discusses the challenges and anti-patterns that arise when using existing scheduling systems for data-specific workflows.Anna explains the overlap between CI/CD, scheduling, and orchestration tools, and the limitations that occur when these tools are used for data workflows. She highlights the importance of visibility and governance at scale and the need for a dedicated orchestrator like Kestra. 
The conversation also delves into the challenges of using data orchestrators for non-data workflows and the benefits of combining code and UI-driven approaches.Anna discusses Kestra's architecture, which supports both JDBC and Kafka backends, and its focus on API-first interactions. She explains how Kestra handles task granularity, inputs, and outputs, and the flexibility provided by its plugin system. The episode also explores Kestra's approach to data as assets, the target audience for Kestra, and how it bridges different workflows across organizational boundaries.The discussion touches on Kestra's open-core model, the challenges of balancing open-source and enterprise features, and the innovative ways Kestra is being applied. Anna shares insights into Kestra's local development experience, the lessons learned in building the product, and the upcoming features and projects that Kestra is excited to explore. ... Read more

26 Nov 2024

44:30


#447

Streaming Data Into The Lakehouse With Iceberg And Trino At Going

In this episode, I had the pleasure of speaking with Ken Pickering, VP of Engineering at Going, about the intricacies of streaming data into a Trino and Iceberg lakehouse. Ken shared his journey from product engineering to becoming deeply involved in data-centric roles, highlighting his experiences in ecommerce and InsurTech. At Going, Ken leads the data platform team, focusing on finding travel deals for consumers, a task that involves handling massive volumes of flight data and event stream information.Ken explained the dual approach of passive and active search strategies used by Going to manage the vast data landscape. Passive search involves aggregating data from global distribution systems, while active search is more transactional, querying specific flight prices. This approach helps Going sift through approximately 50 petabytes of data annually to identify the best travel deals.We delved into the technical architecture supporting these operations, including the use of Confluent for data streaming, Starburst Galaxy for transformation, and Databricks for modeling. Ken emphasized the importance of an open lakehouse architecture, which allows for flexibility and scalability as the business grows.Ken also discussed the composition of Going's engineering and data teams, highlighting the collaborative nature of their work and the reliance on vendor tooling to streamline operations. He shared insights into the challenges and strategies of managing data life cycles, ensuring data quality, and maintaining uptime for consumer-facing applications.Throughout our conversation, Ken provided a glimpse into the future of Going's data architecture, including potential expansions into other travel modes and the integration of large language models for enhanced customer interaction. This episode offers a comprehensive look at the complexities and innovations in building a data-driven travel advisory service. ... Read more

18 Nov 2024

39:49


#446

An Opinionated Look At End-to-end Code Only Analytical Workflows With Bruin

SummaryThe challenges of integrating all of the tools in the modern data stack has led to a new generation of tools that focus on a fully integrated workflow. At the same time, there have been many approaches to how much of the workflow is driven by code vs. not. Burak Karakan is of the opinion that a fully integrated workflow that is driven entirely by code offers a beneficial and productive means of generating useful analytical outcomes. In this episode he shares how Bruin builds on those opinions and how you can use it to build your own analytics without having to cobble together a suite of tools with conflicting abstractions.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today! ---Your host is Tobias Macey and today I'm interviewing Burak Karakan about the benefits of building code-only data systems Interview ---Introduction ---How did you get involved in the area of data management? ---Can you describe what Bruin is and the story behind it? ------Who is your target audience? ---There are numerous tools that address the ETL workflow for analytical data. What are the pain points that you are focused on for your target users? ---How does a code-only approach to data pipelines help in addressing the pain points of analytical workflows? ------How might it act as a limiting factor for organizational involvement? ---Can you describe how Bruin is designed? ------How have the design and scope of Bruin evolved since you first started working on it? ---You call out the ability to mix SQL and Python for transformation pipelines. What are the components that allow for that functionality? ------What are some of the ways that the combination of Python and SQL improves ergonomics of transformation workflows? ---What are the key features of Bruin that help to streamline the efforts of organizations building analytical systems? ---Can you describe the workflow of someone going from source data to warehouse and dashboard using Bruin and Ingestr? ---What are the opportunities for contributions to Bruin and Ingestr to expand their capabilities? ---What are the most interesting, innovative, or unexpected ways that you have seen Bruin and Ingestr used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on Bruin? ---When is Bruin the wrong choice? ---What do you have planned for the future of Bruin? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/burakkarakan/?originalSubdomain=de) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. 
The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [Bruin] (https://getbruin.com/) --- [Fivetran] (https://www.fivetran.com/) --- [Stitch] (https://www.stitchdata.com/) --- [Ingestr] (https://github.com/bruin-data/ingestr) --- [Bruin CLI] (https://github.com/bruin-data/bruin) --- [Meltano] (https://meltano.com/) --- [SQLGlot] (https://github.com/tobymao/sqlglot) --- [dbt] (https://www.getdbt.com/) --- [SQLMesh] (https://sqlmesh.readthedocs.io/en/stable/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/sqlmesh-open-source-dataops-episode-380) --- [SDF] (https://www.sdf.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/sdf-fast-and-expressive-sql-transformation-episode-440) --- [Airflow] (https://airflow.apache.org/) --- [Dagster] (https://dagster.io/) --- [Snowpark] (https://www.snowflake.com/en/data-cloud/snowpark/) --- [Atlan] (https://atlan.com/) --- [Evidence] (https://evidence.dev/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/) ... Read more

11 Nov 2024

56:11


#445

Feldera: Bridging Batch and Streaming with Incremental Computation

SummaryIn this episode of the Data Engineering Podcast, the creators of Feldera talk about their incremental compute engine designed for continuous computation of data, machine learning, and AI workloads. The discussion covers the concept of incremental computation, the origins of Feldera, and its unique ability to handle both streaming and batch data seamlessly. The guests explore Feldera's architecture, applications in real-time machine learning and AI, and challenges in educating users about incremental computation. They also discuss the balance between open-source and enterprise offerings, and the broader implications of incremental computation for the future of data management, predicting a shift towards unified systems that handle both batch and streaming data efficiently.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today! ---As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us you should listen to Data Citizens® Dialogues, the forward-thinking podcast from the folks at Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. They address questions around AI governance, data sharing, and working at global scale. In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. While data is shaping our world, Data Citizens Dialogues is shaping the conversation. Subscribe to [Data Citizens Dialogues] (https://www.collibra.com/podcasts) on Apple, Spotify, Youtube, or wherever you get your podcasts. ---Your host is Tobias Macey and today I'm interviewing Leonid Ryzhyk, Lalith Suresh, and Mihai Budiu about Feldera, an incremental compute engine for continous computation of data, ML, and AI workloads Interview ---Introduction ---Can you describe what Feldera is and the story behind it? ---DBSP (the theory behind Feldera) has won multiple awards from the database research community. Can you explain what it is and how it solves the incremental computation problem? ---Depending on which angle you look at it, Feldera has attributes of data warehouses, federated query engines, and stream processors. What are the unique use cases that Feldera is designed to address? ------In what situations would you replace another technology with Feldera? ------When is it an additive technology? ---Can you describe the architecture of Feldera? ------How have the design and scope evolved since you first started working on it? ---What are the state storage interfaces available in Feldera? 
------What are the opportunities for integrating with or building on top of open table formats like Iceberg, Lance, Hudi, etc.? ---Can you describe a typical workflow for an engineer building with Feldera? ---You advertise Feldera's utility in ML and AI use cases in addition to data management. What are the features that make it conducive to those applications? ---What is your philosophy toward the community growth and engagement with the open source aspects of Feldera and how you're balancing that with sustainability of the project and business? ---What are the most interesting, innovative, or unexpected ways that you have seen Feldera used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on Feldera? ---When is Feldera the wrong choice? ---What do you have planned for the future of Feldera? Contact Info ---Leonid ------ [Website] (https://ryzhyk.net/) ------ [GitHub] (https://github.com/ryzhyk) ------ [LinkedIn] (https://www.linkedin.com/in/leonid-ryzhyk-0ba031b9/) ---Lalith ------ [LinkedIn] (https://www.linkedin.com/in/lalith-suresh-34bb8911/) ------ [Website] (https://lalith.in/research/) ---Mihai ------ [Website] (https://mihaibudiu.github.io/work/index.html) ------ [GitHub] (https://github.com/mihaibudiu) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. 
Links --- [Feldera] (https://www.feldera.com/) ------ [GitHub] (https://github.com/feldera/feldera) --- [DBSP] (https://arxiv.org/abs/2203.16684) paper ------ [Rust Crate] (https://docs.rs/dbsp/latest/dbsp/) --- [Differential Dataflow] (https://timelydataflow.github.io/differential-dataflow/) --- [Trino] (https://trino.io/) --- [Flink] (https://flink.apache.org/) --- [Spark] (https://spark.apache.org/) --- [Materialize] (https://materialize.com/) --- [Clickhouse] (https://clickhouse.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/clickhouse-data-warehouse-episode-88/) --- [DuckDB] (https://duckdb.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/duckdb-in-process-olap-database-episode-270/) --- [Snowflake] (https://www.snowflake.com) --- [Arrow] (https://arrow.apache.org/) --- [Substrait] (https://substrait.io/) --- [DataFusion] (https://datafusion.apache.org/) --- [DSP == Digital Signal Processing] (https://en.wikipedia.org/wiki/Digital_signal_processing) --- [CDC == Change Data Capture] (https://en.wikipedia.org/wiki/Change_data_capture) --- [PRQL] (https://prql-lang.org/) --- [LSM (Log-Structured Merge) Tree] (https://en.wikipedia.org/wiki/Log-structured_merge-tree) --- [Iceberg] (https://iceberg.apache.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52/) --- [Delta Lake] (https://delta.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/delta-lake-data-lake-episode-85/) --- [Open VSwitch] (https://www.openvswitch.org/) --- [Feature Engineering] (https://en.wikipedia.org/wiki/Feature_engineering) --- [Calcite] (https://calcite.apache.org/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/)
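To make the incremental-computation idea referenced above concrete, here is a minimal hand-rolled sketch. It is not Feldera's API (Feldera programs are written as SQL and the engine derives the incremental plan via DBSP); it only illustrates the core trick of folding weighted deltas into previously computed state instead of recomputing from scratch. All names and values are made up.

```python
# Conceptual sketch only -- this is NOT Feldera's API. It illustrates the
# delta-driven idea that DBSP formalizes: consume changes as weighted rows
# (weight +1 for an insert, -1 for a retraction) and fold them into
# previously computed state instead of recomputing the aggregate from scratch.
from collections import defaultdict

def apply_delta(state, delta):
    """Fold a batch of (key, amount, weight) changes into a running SUM per key."""
    for key, amount, weight in delta:
        state[key] += amount * weight  # a weight of -1 retracts a previously seen row
    return state

running_totals = defaultdict(float)

# Batch 1: two inserts arrive (could be a bulk backfill or a streaming micro-batch).
apply_delta(running_totals, [("acct-1", 100.0, +1), ("acct-2", 50.0, +1)])

# Batch 2: correct acct-1 by retracting the old row and inserting the new value.
apply_delta(running_totals, [("acct-1", 100.0, -1), ("acct-1", 120.0, +1)])

print(dict(running_totals))  # {'acct-1': 120.0, 'acct-2': 50.0}
```

Feldera generalizes this pattern to arbitrary SQL pipelines (joins, aggregations, and so on) and applies it uniformly to batch and streaming inputs, which is what allows the same program to serve both modes.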

04 Nov 2024

47 MINS

47:36



#444

Accelerate Migration Of Your Data Warehouse with Datafold's AI Powered Migration Agent

SummaryGleb Mezhanskiy, CEO and co-founder of Datafold, joins Tobias Macey to discuss the challenges and innovations in data migrations. Gleb shares his experiences building and scaling data platforms at companies like Autodesk and Lyft, and how these experiences inspired the creation of Datafold to address data quality issues across teams. He outlines the complexities of data migrations, including common pitfalls such as technical debt, and the importance of achieving parity between old and new systems. Gleb also discusses Datafold's innovative use of AI and large language models (LLMs) to automate translation and reconciliation processes in data migrations, reducing the time and effort required for migrations.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today! ---Your host is Tobias Macey and today I'm welcoming back Gleb Mezhanskiy to talk about Datafold's experience bringing AI to bear on the problem of migrating your data stack Interview ---Introduction ---How did you get involved in the area of data management? ---Can you describe what the Data Migration Agent is and the story behind it? ------What is the core problem that you are targeting with the agent? ---What are the biggest time sinks in the process of database and tooling migration that teams run into? ---Can you describe the architecture of your agent? ------What was your selection and evaluation process for the LLM that you are using? ---What were some of the main unknowns that you had to discover going into the project? ------What are some of the evolutions in the ecosystem that occurred either during the development process or since your initial launch that have caused you to second-guess elements of the design? ---In terms of SQL translation, there are libraries such as SQLGlot and the work being done with SDF that aim to address that through AST parsing and subsequent dialect generation (a small SQLGlot sketch follows this episode's links). What are the ways that approach is insufficient in the context of a platform migration? ---How does the approach you are taking with the combination of data-diffing and automated translation help build confidence in the migration target? ---What are the most interesting, innovative, or unexpected ways that you have seen the Data Migration Agent used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on building an AI-powered migration assistant? ---When is the Data Migration Agent the wrong choice? ---What do you have planned for the future of applications of AI at Datafold? Contact Info --- [LinkedIn] (https://www.linkedin.com/in/glebmezh/) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. 
[Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [Datafold] (https://www.datafold.com/) --- [Datafold Migration Agent] (https://www.datafold.com/data-migration) --- [Datafold data-diff] (https://www.datafold.com/data-diff) --- [Datafold Reconciliation Podcast Episode] (https://www.dataengineeringpodcast.com/datafold-database-reconciliation-episode-417) --- [SQLGlot] (https://github.com/tobymao/sqlglot) --- [Lark] (https://github.com/lark-parser/lark) parser --- [Claude 3.5 Sonnet] (https://www.anthropic.com/news/claude-3-5-sonnet) --- [Looker] (https://cloud.google.com/looker/?hl=en) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/looker-with-daniel-mintz-episode-55) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/)
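For context on the AST-based approach the interview contrasts with the agent, here is a small SQLGlot sketch. It shows pure dialect transpilation, which rewrites syntax but cannot by itself confirm that the translated query produces identical results; that validation gap is what data-diffing addresses. The query, table, and column names are hypothetical, and this is not Datafold's implementation.

```python
# Illustrative only: AST-based dialect translation with SQLGlot. It rewrites
# syntax between dialects, but it cannot by itself confirm that the migrated
# query returns the same rows -- that is the gap data-diffing fills.
# The query, table, and column names below are hypothetical.
import sqlglot

redshift_sql = "SELECT NVL(amount, 0) AS amount, GETDATE() AS run_at FROM orders"

# Parse as Redshift SQL and emit the Snowflake equivalent.
snowflake_sql = sqlglot.transpile(redshift_sql, read="redshift", write="snowflake")[0]
print(snowflake_sql)
```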

27 Oct 2024

48 MINS

48:50



#443

Bring Vector Search And Storage To The Data Lake With Lance

SummaryThe rapid growth of generative AI applications has prompted a surge of investment in vector databases. While there are numerous engines available now, Lance is designed to integrate with data lake and lakehouse architectures. In this episode, Weston Pace explains the inner workings of the Lance format for table definitions and file storage, and the optimizations that they have made to allow for fast random access and efficient schema evolution. In addition to integrating well with data lakes, Lance is also a first-class participant in the Arrow ecosystem, making it easy to use with your existing ML and AI toolchains (a short LanceDB usage sketch follows this episode's links). This is a fascinating conversation about a technology that is focused on expanding the range of options for working with vector data.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today! ---Your host is Tobias Macey and today I'm interviewing Weston Pace about the Lance file and table format for column-oriented vector storage Interview ---Introduction ---How did you get involved in the area of data management? ---Can you describe what Lance is and the story behind it? ------What are the core problems that Lance is designed to solve? ---------What is explicitly out of scope? ---The README mentions that it is straightforward to convert to Lance from Parquet. What is the motivation for this compatibility/conversion support? ------What formats does Lance replace or obviate? ---In terms of data modeling, Lance obviously adds a vector type. What are the features and constraints that engineers should be aware of when modeling their embeddings or arbitrary vectors? ------Are there any practical or hard limitations on vector dimensionality? ---When generating Lance files/datasets, what are some considerations to be aware of for balancing file/chunk sizes for I/O efficiency and random access in cloud storage? ---I noticed that the file specification has space for feature flags. How has that aided in enabling experimentation in new capabilities and optimizations? ---What are some of the engineering and design decisions that were most challenging and/or had the biggest impact on the performance and utility of Lance? ---The most obvious interface for reading and writing Lance files is through LanceDB. Can you describe the use cases that it focuses on and its notable features? ------What are the other main integrations for Lance? ------What are the opportunities or roadblocks in adding support for Lance and vector storage/indexes in e.g. Iceberg or Delta to enable its use in data lake environments? ---What are the most interesting, innovative, or unexpected ways that you have seen Lance used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on the Lance format? ---When is Lance the wrong choice? ---What do you have planned for the future of Lance? 
Contact Info --- [LinkedIn] (https://www.linkedin.com/in/weston-pace-cool-dude/) --- [GitHub] (https://github.com/westonpace) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Links --- [Lance Format] (https://lancedb.github.io/lance/) --- [LanceDB] (https://lancedb.github.io/lancedb/) --- [Substrait] (https://substrait.io/) --- [PyArrow] (https://arrow.apache.org/docs/python/index.html) --- [FAISS] (https://github.com/facebookresearch/faiss) --- [Pinecone] (https://www.pinecone.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/pinecone-vector-database-similarity-search-episode-189/) --- [Parquet] (https://parquet.apache.org/) --- [Iceberg] (https://iceberg.apache.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52/) --- [Delta Lake] (https://delta.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/delta-lake-data-lake-episode-85/) --- [PyLance] (https://github.com/lancedb/lance/tree/main/python) --- [Hilbert Curves] (https://en.wikipedia.org/wiki/Hilbert_curve) --- [SIFT Vectors] (https://en.wikipedia.org/wiki/Scale-invariant_feature_transform) --- [S3 Express] (https://aws.amazon.com/s3/storage-classes/express-one-zone/) --- [Weka] (https://www.weka.io/) --- [DataFusion] (https://datafusion.apache.org/) --- [Ray Data] (https://www.ray.io/) --- [Torch Data Loader] (https://pytorch.org/tutorials/beginner/basics/data_tutorial.html#preparing-your-data-for-training-with-dataloaders) --- [HNSW == Hierarchical Navigable Small Worlds] (https://lancedb.github.io/lancedb/concepts/index_hnsw/) vector index --- [IVFPQ] (https://lancedb.github.io/lancedb/concepts/index_ivfpq/) vector index --- [GeoJSON] (https://geojson.org/) --- [Polars] (https://docs.pola.rs/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/)
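As a quick illustration of the "most obvious interface" mentioned in the interview outline, here is a minimal LanceDB sketch (assuming the lancedb and pandas packages are installed). The dataset path, table contents, and query vector are made-up examples, not anything from the episode.

```python
# A minimal LanceDB sketch (assumes the lancedb and pandas packages are
# installed). The dataset path, table contents, and query vector are made up.
import lancedb

db = lancedb.connect("./lance_demo")  # directory-backed storage, lake-friendly

table = db.create_table(
    "docs",
    data=[
        {"id": 1, "text": "hello world", "vector": [0.1, 0.2, 0.3, 0.4]},
        {"id": 2, "text": "data lakes", "vector": [0.9, 0.1, 0.0, 0.2]},
    ],
    mode="overwrite",  # keep the example re-runnable
)

# Approximate nearest-neighbor search over the default "vector" column.
results = table.search([0.1, 0.2, 0.25, 0.4]).limit(1).to_pandas()
print(results[["id", "text"]])
```

Because the underlying Lance data lives in ordinary files, the same dataset can sit in object storage alongside the rest of a lakehouse rather than in a separate managed vector service.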

20 Oct 2024

58 MINS

58:01



#442

The Role of Python in Shaping the Future of Data Platforms with DLT

SummaryIn this episode of the Data Engineering Podcast, Adrian Brudaru and Marcin Rudolf, co-founders of dltHub, delve into the principles guiding dlt's development, emphasizing its role as a library rather than a platform, and its integration with lakehouse architectures and AI application frameworks. The episode explores the impact of the Python ecosystem's growth on dlt, highlighting integrations with high-performance libraries and the benefits of Arrow and DuckDB. It concludes with a discussion of the future of dlt, including plans for a portable data lake and the importance of interoperability in data management tools (a minimal dlt pipeline sketch follows this episode's links).Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today! ---Your host is Tobias Macey and today I'm interviewing Adrian Brudaru and Marcin Rudolf, cofounders at dltHub, about the growth of dlt and the numerous ways that you can use it to address the complexities of data integration Interview ---Introduction ---How did you get involved in the area of data management? ---Can you describe what dlt is and how it has evolved since we last spoke (September 2023)? ------What are the core principles that guide your work on dlt and dltHub? ---You have taken a very opinionated stance against managed extract/load services. What are the shortcomings of those platforms, and when would you argue in their favor? ---The landscape of data movement has undergone some interesting changes over the past year. Most notably, the growth of PyAirbyte and the rapid shifts around the needs of generative AI stacks (vector stores, unstructured data processing, etc.). How has that informed your product development and positioning? ------The Python ecosystem, and in particular data-oriented Python, has also undergone substantial evolution. What are the developments in the libraries and frameworks that you have been able to benefit from? ---What are some of the notable investments that you have made in the developer experience for building dlt pipelines? ------How have the interfaces for source/destination development improved? ---You recently published a post about the idea of a portable data lake. What are the missing pieces that would make that possible, and what are the developments/technologies that put that idea within reach? ---What is your strategy for building a sustainable product on top of dlt? ------How does that strategy help to form a "virtuous cycle" of improving the open source foundation? ---What are the most interesting, innovative, or unexpected ways that you have seen dlt used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on dlt? ---When is dlt the wrong choice? ---What do you have planned for the future of dlt/dltHub? 
Contact Info ---Adrian ------ [LinkedIn] (https://www.linkedin.com/in/data-team/?originalSubdomain=de) ---Marcin ------ [LinkedIn] (https://www.linkedin.com/in/marcinrudolf/?originalSubdomain=de) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements ---Thank you for listening! Don't forget to check out our other shows. [Podcast.__init__] (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The [AI Engineering Podcast] (https://www.aiengineeringpodcast.com) is your guide to the fast-moving world of building AI systems. ---Visit the [site] (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. ---If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story. Links --- [dlt] (https://dlthub.com) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/dlt-data-integration-library-episode-390) --- [PyArrow] (https://arrow.apache.org/docs/python/) --- [Polars] (https://docs.pola.rs/) --- [Ibis] (https://ibis-project.org/) --- [DuckDB] (https://duckdb.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/duckdb-in-process-olap-database-episode-270/) --- [dlt Data Contracts] (https://dlthub.com/docs/general-usage/schema-contracts) --- [RAG == Retrieval Augmented Generation] (https://github.blog/ai-and-ml/generative-ai/what-is-retrieval-augmented-generation-and-what-does-it-do-for-generative-ai/) ------ [AI Engineering Podcast Episode] (https://www.aiengineeringpodcast.com/retrieval-augmented-generation-implementation-episode-34) --- [PyAirbyte] (https://docs.airbyte.com/using-airbyte/pyairbyte/getting-started) --- [OpenAI o1 Model] (https://openai.com/o1/) --- [LanceDB] (https://lancedb.com/) --- [QDrant Embedded] (https://qdrant.tech/) --- [Airflow] (https://airflow.apache.org/) --- [GitHub Actions] (https://github.com/features/actions) --- [Arrow DataFusion] (https://datafusion.apache.org/) --- [Apache Arrow] (https://arrow.apache.org/) --- [PyIceberg] (https://py.iceberg.apache.org/) --- [Delta-RS] (https://github.com/delta-io/delta-rs) --- [SCD2 == Slowly Changing Dimensions] (https://dlthub.com/docs/general-usage/incremental-loading#scd2-strategy) --- [SQLAlchemy] (https://www.sqlalchemy.org/) --- [SQLGlot] (https://github.com/tobymao/sqlglot) --- [FSSpec] (https://github.com/fsspec/) --- [Pydantic] (https://docs.pydantic.dev/latest/) --- [Spacy] (https://spacy.io/) --- [Entity Recognition] (https://en.wikipedia.org/wiki/Named-entity_recognition) --- [Parquet File Format] (https://parquet.apache.org/) --- [Python Decorator] (https://book.pythontips.com/en/latest/decorators.html) --- [REST API Toolkit] (https://dlthub.com/blog/rest-api-source-client) --- [OpenAPI Connector Generator] (https://dlthub.com/docs/dlt-ecosystem/verified-sources/openapi-generator) --- [ConnectorX] (https://github.com/sfu-db/connector-x) --- [Python no-GIL] (https://www.blog.pythonlibrary.org/2024/03/14/python-3-13-allows-disabling-of-the-gil-subinterpreters/) --- [Delta Lake] (https://delta.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/delta-lake-data-lake-episode-85/) --- [SQLMesh] (https://sqlmesh.readthedocs.io/en/stable/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/sqlmesh-open-source-dataops-episode-380) --- [Hamilton] 
(https://github.com/DAGWorks-Inc/hamilton) --- [Tabular] (https://www.tabular.io/) --- [PostHog] (https://posthog.com/) ------ [Podcast.__init__ Episode] (https://www.pythonpodcast.com/episodepage/open-source-product-analytics-with-posthog) --- [AsyncIO] (https://docs.python.org/3/library/asyncio.html) --- [Cursor.AI] (https://www.cursor.com/) --- [Data Mesh] (https://www.datamesh-architecture.com/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/episodepage/straining-your-data-lake-through-a-data-mesh) --- [FastAPI] (https://fastapi.tiangolo.com/) --- [LangChain] (https://www.langchain.com/) --- [GraphRAG] (https://neo4j.com/blog/graphrag-manifesto/) ------ [AI Engineering Podcast Episode] (https://www.aiengineeringpodcast.com/graphrag-knowledge-graph-semantic-retrieval-episode-37) --- [Property Graph] (https://en.wikipedia.org/wiki/Property_graph) --- [Python uv] (https://docs.astral.sh/uv/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/)
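To ground the "library rather than a platform" point, here is a minimal dlt sketch (assuming dlt is installed with the duckdb extra). The resource, pipeline, and dataset names are arbitrary examples; a real source would page through an API or database rather than yield hard-coded rows.

```python
# A minimal dlt sketch of the "library, not platform" workflow (assumes dlt is
# installed with the duckdb extra). Resource, pipeline, and dataset names are
# arbitrary; a real source would page through an API instead of yielding
# hard-coded rows.
import dlt

@dlt.resource(name="users", write_disposition="merge", primary_key="id")
def users():
    yield [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]

pipeline = dlt.pipeline(
    pipeline_name="demo_pipeline",
    destination="duckdb",
    dataset_name="raw",
)

# dlt infers and evolves the schema, then loads into the destination.
load_info = pipeline.run(users())
print(load_info)
```

Because the pipeline is plain Python, the same script can run locally, in Airflow, or in GitHub Actions without a separate orchestration platform.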

13 Oct 2024

54 MINS

54:08



#441

Build Your Data Transformations Faster And Safer With SDF

SummaryIn this episode of the Data Engineering Podcast, Lukas Schulte, co-founder and CEO of SDF, explores the development and capabilities of this fast and expressive SQL transformation tool. From its origins as a solution for addressing data privacy, governance, and quality concerns in modern data management, to its unique features like static analysis and type correctness, Lukas dives into what sets SDF apart from other tools like dbt and SQLMesh (a toy sketch of schema-aware static analysis follows this episode's links). Tune in for insights on building a business around a developer tool, the importance of community and user experience in the data engineering ecosystem, and plans for future development, including supporting Python models and enhancing execution capabilities.Announcements ---Hello and welcome to the Data Engineering Podcast, the show about modern data management ---Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at [dataengineeringpodcast.com/datafold] (https://www.dataengineeringpodcast.com/datafold) today! ---Your host is Tobias Macey and today I'm interviewing Lukas Schulte about SDF, a fast and expressive SQL transformation tool that understands your schema Interview ---Introduction ---How did you get involved in the area of data management? ---Can you describe what SDF is and the story behind it? ------What's the story behind the name? ---What problem are you solving with SDF? ------dbt has been the dominant player for SQL-based transformations for several years, with other notable competition in the form of SQLMesh. Can you give an overview of the Venn diagram for features and functionality across SDF, dbt, and SQLMesh? ---Can you describe the design and implementation of SDF? ------How have the scope and goals of the project changed since you first started working on it? ---What does the development experience look like for a team working with SDF? ------How does that differ between the open and paid versions of the product? ---What are the features and functionality that SDF offers to address intra- and inter-team collaboration? ---One of the challenges for any second-mover technology with an established competitor is the adoption/migration path for teams who have already invested in the incumbent (dbt in this case). How are you addressing that barrier for SDF? ------Beyond the core migration path for the incumbent product's direct functionality, there is the tooling and communal knowledge that grows up around that product. How are you thinking about that aspect of the current landscape? ---What is your governing principle for what capabilities are in the open core and which go in the paid product? ---What are the most interesting, innovative, or unexpected ways that you have seen SDF used? ---What are the most interesting, unexpected, or challenging lessons that you have learned while working on SDF? ---When is SDF the wrong choice? ---What do you have planned for the future of SDF? 
Contact Info --- [LinkedIn] (https://www.linkedin.com/in/lukas-schulte-a6b16254/) Parting Question ---From your perspective, what is the biggest gap in the tooling or technology for data management today? Links --- [SDF] (https://www.sdf.com/) --- [Semantic Data Warehouse] (https://www.datacamp.com/blog/semantic-layer) --- [asdf-vm] (https://asdf-vm.com/) --- [dbt] (https://www.getdbt.com/) --- [Software Linting] (https://en.wikipedia.org/wiki/Lint_(software) ) --- [SQLMesh] (https://sqlmesh.readthedocs.io/en/stable/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/sqlmesh-open-source-dataops-episode-380) --- [Coalesce] (https://coalesce.io/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/coalesce-enterprise-analytics-transformations-episode-278) --- [Apache Iceberg] (https://iceberg.apache.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52/) --- [DuckDB] (https://duckdb.org/) ------ [Podcast Episode] (https://www.dataengineeringpodcast.com/duckdb-in-process-olap-database-episode-270/) --- [SDF Classifiers] (https://docs.sdf.com/guide/basics/classifiers) --- [dbt Semantic Layer] (https://docs.getdbt.com/docs/build/semantic-models) --- [dbt expectations] (https://hub.getdbt.com/calogica/dbt_expectations/latest/) --- [Apache Datafusion] (https://datafusion.apache.org/) --- [Ibis] (https://ibis-project.org/) The intro and outro music is from [The Hug] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by [The Freak Fandango Orchestra] (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / [CC BY-SA] (http://creativecommons.org/licenses/by-sa/3.0/)
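To illustrate what "static analysis and type correctness" buys in practice, here is a toy sketch of schema-aware checking. It is not SDF's implementation or CLI; SDF ships its own compiler, and SQLGlot is used here purely as a convenient SQL parser. The schema, model SQL, and the deliberate typo are all hypothetical.

```python
# Toy illustration only -- NOT SDF's implementation or CLI. SQLGlot is used
# here purely as a convenient SQL parser; the schema, model SQL, and the
# deliberate typo are hypothetical.
import sqlglot
from sqlglot import exp

DECLARED_SCHEMA = {"orders": {"id", "amount", "created_at"}}

model_sql = "SELECT id, amout FROM orders"  # typo: 'amout' instead of 'amount'

known_columns = DECLARED_SCHEMA["orders"]
for column in sqlglot.parse_one(model_sql).find_all(exp.Column):
    if column.name not in known_columns:
        print(f"compile-time error: unknown column '{column.name}' on table 'orders'")
```

The point is that the bad column reference is reported before anything is submitted to the warehouse, which is the class of error SDF aims to catch at compile time.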

06 Oct 2024

42 MINS

42:36
