In January 2022, we released several new updates, including support for RSS feeds, loop-over limits, vertical pagination in the web scraper, support for OAuth 1a, and more. On top of that, we have also fixed 22 bugs to make the experience more seamless for our users.
Let’s learn about these features and updates in detail.
For this release, we have added support for RSS feeds. The RSS trigger lets you run flows whenever new RSS items arrive. In the RSS trigger, you specify the feed URL and, if authentication is required, your login credentials. There is also a Poll Period option that controls how frequently Byteline checks the RSS feed.
By default, the RSS trigger only picks up items that arrived during the poll period. For instance, if the poll period is 3 hours, it only looks at items published in the last 3 hours. However, you can override this by ticking the Use All RSS Items option. The Maximum items to return option defaults to 10, which is handy while testing; once you are satisfied with your testing, you can change it to All.
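The poll-period behavior described above can be sketched in a few lines. This is a minimal illustration, not Byteline's actual implementation; the function and field names (`filter_rss_items`, `published`) are made up for the example.

```python
from datetime import datetime, timedelta

def filter_rss_items(items, poll_period_hours, use_all_items=False,
                     max_items=10, now=None):
    """Return the RSS items one poll would pick up (illustrative sketch)."""
    now = now or datetime.utcnow()
    if not use_all_items:
        # Default behavior: only items published within the poll period.
        cutoff = now - timedelta(hours=poll_period_hours)
        items = [i for i in items if i["published"] >= cutoff]
    if max_items is not None:  # None stands in for the "All" setting
        items = items[:max_items]
    return items

# Example: a 3-hour poll period only keeps items from the last 3 hours.
now = datetime(2022, 1, 15, 12, 0)
feed = [
    {"title": "fresh", "published": datetime(2022, 1, 15, 11, 0)},
    {"title": "stale", "published": datetime(2022, 1, 15, 7, 0)},
]
print([i["title"] for i in filter_rss_items(feed, 3, now=now)])  # ['fresh']
```

Ticking Use All RSS Items corresponds to passing `use_all_items=True`, which skips the cutoff entirely.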
Loop Over allows you to run the same task for each element of a list. We have added a loop-over limit to the Loop Over dialog box. To enable it, click the horizontal ellipsis button and select the Limit loop over record option. Then enter your desired number in the Loop over limit field. For instance, if you enter 5, the flow will only loop over the first 5 items.
If you no longer want the limit, click the trash bin icon to delete it.
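Conceptually, the loop-over limit is just a cap on how many list elements the task runs against. A minimal sketch, with made-up names (`loop_over`, `task`) that are not Byteline internals:

```python
def loop_over(items, task, limit=None):
    """Run `task` for each item, honoring an optional loop-over limit (sketch)."""
    if limit is not None:
        items = items[:limit]  # a limit of 5 means only the first 5 items
    return [task(item) for item in items]

# With a limit of 5, only the first 5 of 7 items are processed.
print(loop_over([1, 2, 3, 4, 5, 6, 7], lambda x: x * 2, limit=5))  # [2, 4, 6, 8, 10]
```

Deleting the limit corresponds to passing `limit=None`, which processes every item.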
In the web scraper, we have added a new option called Pagination - Vertical Scrolling that lets you specify the path of the Load more button on a page. In the Pagination - Vertical Scrolling section, enter the button's path in the Next page button/link XPath field.
You can also choose whether to scrape all the pages or a specific number of pages. Check the All pages option to get every page, or enter the desired number into the Max pages to scrape field. When All pages is checked, Max pages to scrape is automatically ignored.
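The interaction between the Load more button, All pages, and Max pages to scrape can be modeled as a simple loop. This sketch simulates pages as a list and uses invented names (`scrape`, `site`); it is not how Byteline's scraper is implemented.

```python
# Toy model: each inner list is one "page" revealed by the Load more button.
site = [["a", "b"], ["c", "d"], ["e"]]

def scrape(site, all_pages=False, max_pages=1):
    """Collect items page by page; stop when the Load more button is gone
    (no pages left) or the Max pages to scrape cap is hit. The cap is
    ignored when all_pages is True."""
    results, page = [], 0
    while page < len(site):                      # button still present
        if not all_pages and page >= max_pages:  # page cap reached
            break
        results.extend(site[page])
        page += 1
    return results

print(scrape(site, max_pages=2))     # ['a', 'b', 'c', 'd']
print(scrape(site, all_pages=True))  # ['a', 'b', 'c', 'd', 'e']
```

Note how `all_pages=True` makes `max_pages` irrelevant, mirroring the UI behavior where All pages overrides Max pages to scrape.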
Earlier, Byteline only supported OAuth 2, but we have now added support for OAuth 1a. You can now quickly and easily integrate with legacy systems that still use OAuth 1a.
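For readers unfamiliar with OAuth 1a: unlike OAuth 2, each request is individually signed, typically with HMAC-SHA1 over a "signature base string" (RFC 5849). The sketch below shows the core of that signing step using only the Python standard library; it omits RFC details such as sorting by percent-encoded keys and handling duplicate parameters, and is not Byteline's implementation.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth1_signature(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0a HMAC-SHA1 signature (simplified sketch)."""
    enc = lambda s: quote(str(s), safe="")
    # Normalized parameter string: sorted key=value pairs joined by "&".
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    # Signature base string: METHOD & encoded-URL & encoded-params.
    base = "&".join([method.upper(), enc(url), enc(param_str)])
    # Signing key: consumer secret and token secret joined by "&".
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = oauth1_signature("GET", "https://api.example.com/items",
                       {"oauth_nonce": "abc123", "count": 10}, "consumer-secret")
print(sig)  # a 28-character base64 string
```

This per-request signing is exactly the legacy-system friction that built-in OAuth 1a support takes off your plate.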
We have added a copy button to the flow status output dialog that lets you copy the output data. Click on the desired flow run status, then click the i button of the node whose details you want to copy. In the output data dialog box, click the Copy to clipboard button to copy the data.
The web scraper, which extracts content and data from websites, has become more robust. With this update, it runs more accurately, and you can extract elements such as text, links, rich text, and images from websites with greater ease.
Whatever XPaths you specify for web scraping now work more reliably than ever before.
We have added a Repeatable click XPath feature to the Web Scraper node that lets you scrape a list of items that require an additional action. If you are scraping information from a list and need to click somewhere for each list item, you can specify that button's path in the Repeatable click XPath field.
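The pattern behind Repeatable click XPath is "click, then extract" for every list item. A toy sketch with invented names (`scrape_list`, `reveal`, `extract`), purely to illustrate the flow:

```python
def scrape_list(items, reveal, extract):
    """For each list item, perform the repeatable click (`reveal`) first,
    then extract its fields -- mirroring the Repeatable click XPath idea."""
    return [extract(reveal(item)) for item in items]

# Toy model: each item hides its detail until it is "clicked".
items = [{"hidden": "alpha"}, {"hidden": "beta"}]
reveal = lambda item: {**item, "detail": item["hidden"]}  # the per-item click
extract = lambda item: item["detail"]                     # the actual scrape
print(scrape_list(items, reveal, extract))  # ['alpha', 'beta']
```

In Byteline itself you only supply the button's XPath; the per-item click-then-scrape loop happens for you.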
Earlier, the flow failure email was very brief and contained little to no error information. Now it contains much more detail, so you can easily figure out what went wrong and quickly fix the issue.
Lastly, we have fixed 22 bugs in the January release to make the Byteline platform more stable.
Stay tuned for more great updates, and feel free to reach out to us with any questions!