First, take the URL and put it into a Text Input tool, then add a Download tool and untick "Encode URL Text". Put a Browse tool at the end, run the workflow, and click on the first record under the DownloadData header; hold Shift, scroll to the bottom, select again, and paste everything into Sublime Text.
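The same download step, done outside Alteryx, can be sketched in Python. This is a minimal example; the URL and output filename are placeholders for whatever episodes page you are actually scraping:

```python
import requests

# Placeholder URL - swap in the actual episodes page you are scraping
url = "https://www.imdb.com/title/tt3032476/episodes?season=1"

# Fetch the raw page source (the equivalent of the Download tool's output)
response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
response.raise_for_status()

# Save it so you can open and inspect it in Sublime Text
with open("downloaded_page.html", "w", encoding="utf-8") as f:
    f.write(response.text)
```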
To help inspect the webpage, right-click and choose Inspect, then click the element-picker arrow and hover over different parts of the page to get a better idea of where to look for the parts of the HTML you need to tokenize.
So essentially, when web scraping, look for the point where everything you want starts. The image above shows the parts of the HTML where the title of each episode and the information relating to that episode sit. I searched the main text with Ctrl+F for titles I already knew, which helped me work out where to look. Here each block starts at the div with class "info", and you want to include everything up to the next episode, so the block ends at the meta itemprop tags.
So the idea is to tokenize that block, split the results to rows, and then parse everything out with regex.
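Outside Alteryx, the tokenize-and-split step could look roughly like the Python sketch below. It assumes the downloaded page is saved as downloaded_page.html and that each episode block starts at a div with class "info"; the exact marker depends on what you see when you inspect the page:

```python
import re

with open("downloaded_page.html", encoding="utf-8") as f:
    html = f.read()

# Split the page wherever an episode block begins.
# '<div class="info"' is the assumed start marker from inspecting the page;
# everything up to the next occurrence belongs to one episode.
chunks = re.split(r'<div class="info"', html)[1:]

# Each chunk now plays the role of one row after the tokenize/split-to-rows
# step in Alteryx.
for chunk in chunks[:3]:
    print(chunk[:200])  # peek at the start of each episode block
```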
The next part is to parse everything out using regex. From the image below you can see that you are only trying to pick out certain parts, such as the dates. You can also see from the Sublime Text web-scrape image how the regex here relates to the raw text. Then just parse out all the useful information by reading the text and writing expressions to match it.
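As a rough illustration of that regex step, the Python sketch below pulls a title and an air date out of one episode chunk. The patterns and the sample string are assumptions, not the page's real markup, and would need adjusting against what you actually see in Sublime Text:

```python
import re

# Guessed patterns - adjust once you look at the real text
title_pattern = re.compile(r'title="([^"]+)"')
date_pattern = re.compile(r'<div class="airdate">\s*([^<]+?)\s*</div>')

def parse_episode(chunk: str) -> dict:
    """Pick out only the parts we care about from one episode block."""
    title = title_pattern.search(chunk)
    airdate = date_pattern.search(chunk)
    return {
        "title": title.group(1) if title else None,
        "airdate": airdate.group(1) if airdate else None,
    }

# Hypothetical snippet standing in for one tokenized episode block
sample = '<a title="Uno"> ... <div class="airdate"> 8 Feb. 2015 </div>'
print(parse_episode(sample))  # {'title': 'Uno', 'airdate': '8 Feb. 2015'}
```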
Another thing you might want to do is remove the season number at the end of the URL, which lets you pull in all the seasons of Better Call Saul. First we need to generate rows for all the seasons: since we know there are 6 seasons, we generate 5 additional rows in the Generate Rows tool.
Then we just add a Formula tool to bring back the number we removed from the URL and increment it for each row, allowing us to pull in every season. This avoids repeating the whole process 6 times.
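The Generate Rows plus Formula combination can be sketched in Python like this. It's a minimal example assuming the season number is the final query parameter on the URL; the base URL here is a placeholder:

```python
# Base URL with the season number stripped off (placeholder IMDB id)
base_url = "https://www.imdb.com/title/tt3032476/episodes?season="

# One URL per season - the equivalent of Generate Rows (6 rows) plus a
# Formula tool that re-appends and increments the number
season_urls = [f"{base_url}{season}" for season in range(1, 7)]

for url in season_urls:
    print(url)
```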