Automation

How to fetch data effortlessly

To allow the data to be fetched periodically, a single function called fetch_and_save_all_data handles all of the API requests, the database connection, and the saving. When this function is called, a request is sent to every API endpoint, and the successful responses are saved in the database. This makes it convenient to call the function periodically through a script manager in the NAS server UI.
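The idea can be sketched as follows: request every endpoint, skip failures, and persist only the successful responses. The fetcher callables and the in-memory dictionary standing in for the database below are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch, assuming hypothetical endpoint fetchers and an
# in-memory dict as a stand-in for the real database.

def fetch_and_save_all_data(fetchers, database):
    """Request every endpoint; save only the successful results."""
    for name, fetch in fetchers.items():
        try:
            payload = fetch()            # one API request per endpoint
        except Exception as exc:         # a failed endpoint is skipped, not fatal
            print(f"skipping {name}: {exc}")
            continue
        database[name] = payload         # stand-in for the real database write
    return database

def failing_fetch():
    raise TimeoutError("no response")

# Example run with one succeeding and one failing endpoint;
# only the successful one ends up in the database.
fetchers = {
    "weather": lambda: {"temp_c": 21},
    "stocks": failing_fetch,
}
db = fetch_and_save_all_data(fetchers, {})
```

Because failures are caught per endpoint, one unreachable API never blocks the others from being saved, which is what makes unattended periodic runs safe.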

The load_data.py script

The script that performs a one-off call to the API endpoints is load_data.py, located in the project root folder. Running it executes the whole API pipeline, from reading the local variables through the database writes and cleanup. This is also the script that the NAS UI runs every minute, and it is solely responsible for retrieving the source data.
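The overall shape of such an entry-point script might look like the sketch below. The helper names (load_config, fetch_and_save_all_data, cleanup) and the environment variable DB_PATH are illustrative assumptions standing in for the real implementation.

```python
# Hypothetical layout of a load_data.py-style entry point:
# read local configuration, run the pipeline, then clean up.
import os

def load_config():
    # stand-in for reading local variables / environment configuration
    return {"db_path": os.environ.get("DB_PATH", "data.db")}

def fetch_and_save_all_data(config):
    # stand-in for the real API requests and database writes
    return {"db_path": config["db_path"]}

def cleanup(result):
    # stand-in for closing connections / removing temporary files
    return f"done, wrote to {result['db_path']}"

def main():
    config = load_config()
    result = fetch_and_save_all_data(config)
    return cleanup(result)

if __name__ == "__main__":
    print(main())
```

Keeping the whole pipeline behind a single main() makes the script trivial for a scheduler to invoke: one command, no arguments, and a clean exit on success.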

Note

If you want to see exactly where the script is deployed, check out the Script Scheduling and Backups section of the GitHub README.

Making modifications

If you ever need to change the data fetching pipeline, avoid modifying the existing architecture in place. First build a new implementation, deploy it, verify that it can run in parallel with the current one, and only then turn the current one off. Skipping this approach risks losing crucial data.
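The parallel-run step above can be sketched like this: both pipelines run side by side, the current one stays authoritative, and the switch happens only once the new one consistently agrees with it. Both pipeline functions here are hypothetical placeholders.

```python
# Sketch of running a new pipeline alongside the current one, assuming
# both return comparable results. The current pipeline stays authoritative.

def current_pipeline():
    return {"rows": 10}   # placeholder for the existing implementation

def new_pipeline():
    return {"rows": 10}   # placeholder for the candidate replacement

def run_in_parallel(current, candidate):
    """Run both pipelines; only the current result is persisted."""
    current_result = current()
    try:
        agrees = candidate() == current_result
    except Exception:
        agrees = False     # a broken candidate never costs any data
    return current_result, agrees

result, safe_to_switch = run_in_parallel(current_pipeline, new_pipeline)
```

Only after safe_to_switch has held true over enough real runs would the old pipeline be retired, which is what protects the crucial data during the migration.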