python: get all youtube video urls of a channel

Increase max-results from 1 to however many you want, but be aware that the API doesn't advise grabbing too many in one call and will limit you at 50 (https://developers.google.com/youtube/2.0/developers_guide_protocol_api_query_parameters).

Instead, you could consider grabbing the data in batches of 25, say, by incrementing start-index until no results come back.

EDIT: Here's the code for how I would do it

import urllib, json
author = 'Youtube_Username'

foundAll = False
ind = 1
videos = []
while not foundAll:
    # Request the next page of up to 50 results from the (old) GData feed
    inp = urllib.urlopen(r'http://gdata.youtube.com/feeds/api/videos?start-index={0}&max-results=50&alt=json&orderby=published&author={1}'.format( ind, author ) )
    try:
        resp = json.load(inp)
        inp.close()
        returnedVideos = resp['feed']['entry']
        for video in returnedVideos:
            videos.append( video )

        ind += 50
        print len( videos )
        # A page with fewer than 50 entries means we've reached the last page
        if ( len( returnedVideos ) < 50 ):
            foundAll = True
    except:
        # Catch the case where the number of videos in the channel is a multiple of 50
        # (the final request returns a feed with no 'entry' key)
        print "error"
        foundAll = True

for video in videos:
    print video['title'] # video title
    print video['link'][0]['href'] #url

After the YouTube API change, max k.'s answer no longer works. As a replacement, the function below returns a list of the YouTube videos in a given channel using the Data API v3 search endpoint. Please note that you need an API key for it to work.

import urllib.request
import json

def get_all_video_in_channel(channel_id):
    api_key = 'YOUR_API_KEY'  # replace with your own API key

    base_video_url = 'https://www.youtube.com/watch?v='
    base_search_url = 'https://www.googleapis.com/youtube/v3/search?'

    first_url = base_search_url + 'key={}&channelId={}&part=snippet,id&order=date&maxResults=25'.format(api_key, channel_id)

    video_links = []
    url = first_url
    while True:
        with urllib.request.urlopen(url) as inp:
            resp = json.load(inp)

        # Keep only video results (the search endpoint also returns playlists and channels)
        for i in resp['items']:
            if i['id']['kind'] == "youtube#video":
                video_links.append(base_video_url + i['id']['videoId'])

        # Follow the pagination token; the last page has no nextPageToken
        try:
            next_page_token = resp['nextPageToken']
            url = first_url + '&pageToken={}'.format(next_page_token)
        except KeyError:
            break
    return video_links
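
A quick usage sketch (the channel id below is a hypothetical placeholder; substitute a real one and set your API key inside the function):

video_urls = get_all_video_in_channel('UCxxxxxxxxxxxxxxxxxxxxxx')  # hypothetical channel id
print(len(video_urls))  # how many videos were found
for url in video_urls[:5]:
    print(url)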

Short answer:

Here's a library that can help with that.

pip install scrapetube

import scrapetube

videos = scrapetube.get_channel("UC9-y-6csu5WGm29I7JiwpnA")

for video in videos:
    print(video['videoId'])
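
Since the question asks for URLs rather than ids, you can build the standard watch URL from each videoId (a small sketch, assuming the usual watch-URL format):

import scrapetube

videos = scrapetube.get_channel("UC9-y-6csu5WGm29I7JiwpnA")

# Build full watch URLs from the returned video ids
urls = ['https://www.youtube.com/watch?v=' + video['videoId'] for video in videos]
print(len(urls))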

Long answer:

I created the module mentioned above due to a lack of any other solutions. Here's what I tried:

  1. Selenium. It worked but had three big drawbacks: it requires a web browser and driver to be installed, it has high CPU and memory demands, and it can't handle big channels.
  2. Using youtube-dl, like this:

import youtube_dl

youtube_dl_options = {
    'skip_download': True,
    'ignoreerrors': True
}
with youtube_dl.YoutubeDL(youtube_dl_options) as ydl:
    videos = ydl.extract_info(f'https://www.youtube.com/channel/{channel_id}/videos')

This also works for small channels, but for bigger ones I would get blocked by YouTube for making so many requests in such a short time (because youtube-dl fetches extra info for every video in the channel).
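
If you do go the youtube-dl route for a small channel, the dict returned by extract_info is playlist-shaped; here's a minimal sketch of pulling the URLs out of it, assuming the usual 'entries' and 'webpage_url' keys and an example channel id:

import youtube_dl

channel_id = 'UC9-y-6csu5WGm29I7JiwpnA'  # example channel id, substitute your own

youtube_dl_options = {
    'skip_download': True,  # only extract metadata, don't download media
    'ignoreerrors': True    # skip videos that fail instead of aborting
}
with youtube_dl.YoutubeDL(youtube_dl_options) as ydl:
    info = ydl.extract_info(f'https://www.youtube.com/channel/{channel_id}/videos')

# Failed entries are None when ignoreerrors is set, so filter them out
urls = [entry['webpage_url'] for entry in info['entries'] if entry]
print(len(urls))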

So I made the library scrapetube, which uses the web API to get all the videos.