We love building things with data and want to remove as much friction as possible for others building with bit.io.
This section provides a growing collection of walkthroughs for different ways to implement pipelines and Extract, Transform, Load (ETL) jobs with bit.io. These references can help you keep your data clean, up-to-date, and in a database. We are starting simple, with ways to schedule Jupyter notebooks and simple Python scripts, and we will continue to add guides at varying levels of complexity.
All of the walkthroughs include full code implementations that you can use to build the provided demo projects or adapt the examples to support your next data project.
Have a Jupyter notebook that you use to clean and prepare a dataset? This guide walks you through automating Jupyter notebooks to run on a schedule and maintain an up-to-date prepared dataset in a bit.io Postgres database.
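One common way to run a notebook on a schedule is to execute it with papermill from a cron job. This is a hedged sketch, not the guide's exact implementation: the notebook name, output path, and project directory below are placeholders.

```shell
# Hypothetical example: execute clean_dataset.ipynb on a daily schedule
# with papermill (pip install papermill). File paths are placeholders.

# Run the notebook once by hand, writing an executed copy to output/:
papermill clean_dataset.ipynb output/clean_dataset_run.ipynb

# crontab -e entry to run the same command every day at 6:00 AM:
# 0 6 * * * cd /path/to/project && papermill clean_dataset.ipynb output/clean_dataset_run.ipynb
```

The executed output notebook doubles as a run log: if a cell fails, the saved copy shows exactly where.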
Need to Extract, Transform, and Load data from various sources into a database? We walk through the fundamental ETL pattern and show how you can implement it with simple Python scripts to pipeline your data into a Postgres database on bit.io.
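The ETL pattern breaks naturally into three functions. Below is a minimal sketch, not the guide's full implementation: the sample data, table name, and connection string are hypothetical, and the load step assumes pandas and SQLAlchemy.

```python
import pandas as pd

def extract():
    """Extract: pull raw records. An inline sample stands in here for
    an API call or CSV download."""
    return pd.DataFrame({
        "city": ["Seattle", "seattle", "Portland"],
        "temp_f": ["52", "54", "57"],
    })

def transform(raw):
    """Transform: normalize text fields and cast string columns to
    proper types."""
    clean = raw.copy()
    clean["city"] = clean["city"].str.title()
    clean["temp_f"] = clean["temp_f"].astype(int)
    return clean

def load(df, connection_string, table_name):
    """Load: write the cleaned frame to Postgres via SQLAlchemy.
    The connection string would come from your bit.io database page."""
    from sqlalchemy import create_engine
    engine = create_engine(connection_string)
    df.to_sql(table_name, engine, if_exists="replace", index=False)

if __name__ == "__main__":
    cleaned = transform(extract())
    # Hypothetical credentials -- replace before running:
    # load(cleaned, "postgresql://user:password@db.bit.io/my_db", "weather")
    print(cleaned)
```

Keeping extract, transform, and load as separate functions makes each step easy to test in isolation and to swap out when a data source changes.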
In Part 2 of our series on simple data pipelines, we show how to automate Python and Postgres scripts to keep your datasets up-to-date on bit.io without tedious manual work to keep up with changing data sources.