How to get all website links from a page with BeautifulSoup code example

Example 1: collect all href links from a website with BeautifulSoup (Python)

# Python 3: bs4 replaces the old BeautifulSoup package, urllib.request replaces urllib2
from bs4 import BeautifulSoup
import urllib.request
import re

html_page = urllib.request.urlopen("https://arstechnica.com")
soup = BeautifulSoup(html_page, "html.parser")

# Collect every <a> tag whose href starts with http:// or https://
links = []
for link in soup.find_all('a', attrs={'href': re.compile("^https?://")}):
    links.append(link.get('href'))
print(links)

Example 2: print each href link from a website with BeautifulSoup (Python)

# Same as Example 1, but prints each link as it is found instead of
# collecting them into a list first
from bs4 import BeautifulSoup
import urllib.request
import re

html_page = urllib.request.urlopen("https://arstechnica.com")
soup = BeautifulSoup(html_page, "html.parser")

for link in soup.find_all('a', attrs={'href': re.compile("^https?://")}):
    print(link.get('href'))
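Both examples only match absolute links, so relative hrefs such as "/about" are skipped. A minimal standard-library-only sketch of the same idea, which also resolves relative links with urljoin, is shown below; the LinkCollector class name, the base URL, and the sample HTML are made-up illustrations, not part of the original examples:

```python
# Standard-library-only link extraction (no BeautifulSoup required):
# html.parser walks the tags, urljoin resolves relative hrefs against a base URL.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Only <a> tags carry the links we want
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative hrefs (e.g. "/about") to absolute URLs
                    self.links.append(urljoin(self.base_url, value))

# Hypothetical sample page instead of a live request
html = '<a href="/about">About</a> <a href="https://example.com/x">X</a>'
collector = LinkCollector("https://example.com")
collector.feed(html)
print(collector.links)
# → ['https://example.com/about', 'https://example.com/x']
```

To use it on a real page, feed it the decoded response body from urllib.request.urlopen instead of the sample string.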