dlt sources are iterators or lists, and writing them does not require any knowledge beyond basic Python. dlt sources are also pythonic in nature: they are simple and can be chained, pipelined and composed like any other Python iterator or sequence.
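As a minimal sketch of that idea, the snippet below builds a source as a plain generator and composes it with ordinary iterator tools. The names (`players`, the pipeline call in the comment) are illustrative, not part of any dlt API:

```python
from itertools import islice

# A source can be any iterator: a plain generator of dicts works fine.
def players(count):
    for i in range(count):
        yield {"id": i, "rating": 1000 + i}

# Iterators compose like any other Python iterable: filter, map, slice...
top_players = (p for p in players(100) if p["rating"] >= 1050)
first_three = list(islice(top_players, 3))

# In a real pipeline you would hand the iterator straight to dlt, e.g.:
# dlt.pipeline(destination="duckdb").run(players(100), table_name="players")
```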
- `quickstart` loads a nested json document into `duckdb` and then queries it with the built-in `sql_client`, demonstrating parent-child table joins.
- `sql_query` source and `read_table` example. This source iterates over any `SELECT` statement made against a database system supported by `SqlAlchemy`. The example connects to Redshift and iterates over a table containing Ethereum transactions, showing the inferred schema (which nicely preserves typing). Mind that our source is a one-liner :)
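The pattern behind such a source can be sketched in a few lines. This uses stdlib `sqlite3` in place of `SqlAlchemy`/Redshift purely for illustration, with a hypothetical `query_rows` helper and an in-memory table standing in for the Ethereum transactions table:

```python
import sqlite3

def query_rows(conn, select_sql):
    """Yield each row of a SELECT as a dict -- the whole 'source' is this loop."""
    cur = conn.execute(select_sql)
    cols = [c[0] for c in cur.description]
    for row in cur:
        yield dict(zip(cols, row))

# Demo data standing in for a transactions table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tx (hash TEXT, value INTEGER)")
conn.executemany("INSERT INTO tx VALUES (?, ?)", [("0xa", 1), ("0xb", 2)])

rows = list(query_rows(conn, "SELECT * FROM tx"))
```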
- `rasa` example and `rasa_tracker_store` source extract rasa tracker store events into a set of inferred tables. It shows a few common patterns:
  - how to pipeline resources: it depends on a "head" resource that reads the base data (i.e. events from kafka/postgres/file); the dependent resource is called a `transformer`
  - how to write a stream resource which creates table schemas and sends data to those tables depending on the event type
  - how to store `last_timestamp_value` in the state
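The head-resource/transformer pattern above can be sketched with plain generators. The dict `state` stands in for dlt's managed pipeline state, and all names here are hypothetical:

```python
state = {"last_timestamp": 0}  # stand-in for dlt's managed pipeline state

def tracker_events():
    # "head" resource: reads the base data (a hardcoded list instead of kafka/postgres)
    yield {"type": "user", "timestamp": 5, "text": "hi"}
    yield {"type": "bot", "timestamp": 7, "text": "hello"}

def route_events(events):
    # transformer: depends on the head resource, routes each event to a
    # per-type table and remembers the last seen timestamp in the state
    for event in events:
        state["last_timestamp"] = max(state["last_timestamp"], event["timestamp"])
        yield {"table": f"event_{event['type']}", "row": event}

routed = list(route_events(tracker_events()))
```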
- `singer_tap`, `stdout` and `singer_tap_example` form a fully functional wrapper for any singer/meltano source:
  - clones the desired tap, installs it and runs it in a virtual env
  - passes the catalog and config files
  - like rasa, it is a `transformer` (on a stdio pipe) and a stream resource
  - it stores the singer state in the `dlt` state
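Singer taps emit one JSON message per line (SCHEMA, RECORD, STATE), so consuming them boils down to parsing a line stream. A minimal sketch, with hardcoded lines standing in for the tap's stdout and a hypothetical `consume` helper:

```python
import json

# Three typical singer messages, one JSON document per line.
lines = [
    '{"type": "SCHEMA", "stream": "contacts", "schema": {}}',
    '{"type": "RECORD", "stream": "contacts", "record": {"id": 1}}',
    '{"type": "STATE", "value": {"bookmark": "2023-01-01"}}',
]

def consume(message_lines):
    records, last_state = [], None
    for line in message_lines:
        msg = json.loads(line)
        if msg["type"] == "RECORD":
            records.append((msg["stream"], msg["record"]))
        elif msg["type"] == "STATE":
            last_state = msg["value"]  # a wrapper would persist this in dlt state
    return records, last_state

records, last_state = consume(lines)
```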
- `singer_tap_jsonl_example` works like the above, but instead of a process pipe it reads singer messages from a file. It creates a huge hubspot schema.
- `google_sheets` is a source that returns values from a specified sheet. The example takes a sheet, infers a schema, loads it to BigQuery/Redshift and displays the inferred schema. It uses `secrets.toml` to manage credentials and is an example of a one-liner pipeline.
- `chess` is an example of a pipeline project with its own config and credentials files. It is also an example of how transformers are connected to resources, and of resource selection. It should be run from the `examples/chess` folder. It also shows how to use the retry decorator and how to run resources/transformers in parallel with a decorator.
- `chess/chess_dbt.py` is an example of a `dbt` transformation package working with a dataset loaded by `dlt`. The package incrementally processes the loaded data, following the newly loaded packages stored in the `_dlt_loads` table at the end of every pipeline run. Note the automatic use of an isolated virtual environment to run dbt and the sharing of credentials.
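The incremental idea behind that package can be sketched as "only process load ids newer than the last one handled". This uses stdlib `sqlite3` and a simplified two-column `_dlt_loads` layout as an assumption, not the exact table schema:

```python
import sqlite3

# Simplified stand-in for the _dlt_loads table dlt writes after each run.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE _dlt_loads (load_id TEXT, status INTEGER)")
conn.executemany(
    "INSERT INTO _dlt_loads VALUES (?, ?)",
    [("1690000000.1", 0), ("1690000100.2", 0), ("1690000200.3", 0)],
)

# Incremental processing: pick up only load ids newer than the last processed one.
last_processed = "1690000000.1"
new_loads = [
    row[0]
    for row in conn.execute(
        "SELECT load_id FROM _dlt_loads WHERE load_id > ? ORDER BY load_id",
        (last_processed,),
    )
]
```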
- `run_dbt_jaffle` runs dbt's jaffle shop example, taken directly from the github repo, and queries the results with `sql_client`. A `duckdb` database is used to load and transform the data; `write` access to the database is passed from `dlt` to `dbt` and back.
Not yet ported:
- `discord_iterator` is an example that loads sample discord data (messages, channels) into a warehouse from supplied files. It shows several auxiliary pipeline functions and an example of pipelining iterators (with the `map` function). You can also see that the produced schema is quite complicated due to several layers of nesting.
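Pipelining iterators with `map` is just lazy function application over a stream; the sketch below uses made-up discord-like rows and a hypothetical `add_length` stage:

```python
# Each pipeline stage lazily wraps the previous iterator.
messages = iter([
    {"content": "gm", "author": "a"},
    {"content": "wagmi", "author": "b"},
])

def add_length(msg):
    # enrichment stage: add a derived column without mutating the input
    return {**msg, "content_length": len(msg["content"])}

enriched = list(map(add_length, messages))
```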
- `ethereum` source shows that you can build highly scalable, parallel and robust sources as simple iterators.