extractData.ts calls getDataByDate for each day in a date range (initially "every day between today and June 16th, 1995"), using the async library's eachLimit method to make multiple requests in parallel. It stores each day's result as a separate JSON file on the filesystem, and finally combines all the daily JSON data into one single data.json.

You might wonder – why not fetch all the data first and save just one file at the end? When making 9000+ network requests, some of them are bound to fail, and you really don't want to have to start back from zero. Saving each day's data as it runs allows us to continue from where the failure happened.

The extracted JSON will only have data up to the time when extraction was run, which means that sometimes there'll be a new APOD that is missing from our JSON. Here's a comparison of timings before and after on-demand scraping:

*[Figure: timing comparison before and after on-demand scraping]*
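The extraction flow described above can be sketched roughly as follows. This is a minimal, self-contained illustration, not the real extractData.ts: the concurrency helper mimics the behaviour of the async library's eachLimit, getDataByDate is a stub returning placeholder data (the real one scrapes the APOD page), and an in-memory map stands in for the per-day JSON files on disk.

```typescript
type Apod = { date: string; title: string };

// Build "every day between start and end" as ISO date strings (YYYY-MM-DD).
function dateRange(start: Date, end: Date): string[] {
  const days: string[] = [];
  const d = new Date(start);
  while (d <= end) {
    days.push(d.toISOString().slice(0, 10));
    d.setUTCDate(d.getUTCDate() + 1);
  }
  return days;
}

// Minimal stand-in for async's eachLimit: run `iteratee` over `items`
// with at most `limit` calls in flight at once.
async function eachLimit<T>(
  items: T[],
  limit: number,
  iteratee: (item: T) => Promise<void>
): Promise<void> {
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const item = items[next++]; // claim the next index synchronously
      await iteratee(item);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker)
  );
}

// Hypothetical fetcher standing in for the post's getDataByDate.
async function getDataByDate(date: string): Promise<Apod> {
  return { date, title: `APOD for ${date}` };
}

// Fetch every day in the range with limited parallelism, store each
// day's result separately, then combine everything into one object
// (the equivalent of merging the daily files into data.json).
async function extractData(
  start: Date,
  end: Date
): Promise<Record<string, Apod>> {
  const store = new Map<string, Apod>();
  await eachLimit(dateRange(start, end), 10, async (date) => {
    store.set(date, await getDataByDate(date)); // one "file" per day
  });
  return Object.fromEntries(store);
}
```

Because each day's result is persisted as soon as it arrives, a crash partway through loses only the in-flight requests; re-running can skip dates that already have a file.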