Reading a file from a private S3 bucket to a pandas dataframe
Update for pandas 0.22 and up:
If you have already installed s3fs (pip install s3fs), then you can read the file directly from the s3 path, without importing s3fs explicitly:
data = pd.read_csv('s3://bucket....csv')
See the pandas stable docs for details.
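For a private bucket, pandas (via s3fs) picks up credentials through the usual AWS mechanisms (environment variables, ~/.aws/credentials, or an IAM role). A minimal sketch, assuming a hypothetical bucket my-bucket and key data/file.csv and that credentials are already configured:

import pandas as pd

# s3fs is used under the hood whenever the path starts with s3://;
# credentials come from the standard AWS credential chain
df = pd.read_csv('s3://my-bucket/data/file.csv')
print(df.head())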
Based on this answer, I found smart_open to be much simpler to use:
import pandas as pd
from smart_open import smart_open
initial_df = pd.read_csv(smart_open('s3://bucket/file.csv'))
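Note that newer smart_open releases expose open instead of smart_open; a sketch assuming the same bucket/key as above:

import pandas as pd
from smart_open import open  # newer smart_open API

initial_df = pd.read_csv(open('s3://bucket/file.csv'))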
Updated for Pandas 0.20.1
Pandas now uses s3fs to handle S3 connections. From the pandas docs:
pandas now uses s3fs for handling S3 connections. This shouldn’t break any code. However, since s3fs is not a required dependency, you will need to install it separately, like boto in prior versions of pandas.
import os
import pandas as pd
from s3fs.core import S3FileSystem
# aws keys stored in ini file in same path
# refer to boto3 docs for config settings
os.environ['AWS_CONFIG_FILE'] = 'aws_config.ini'
s3 = S3FileSystem(anon=False)
key = 'path/to/your-csv.csv'
bucket = 'your-bucket-name'
df = pd.read_csv(s3.open('{}/{}'.format(bucket, key), mode='rb'))
# or with f-strings
df = pd.read_csv(s3.open(f'{bucket}/{key}', mode='rb'))
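If you'd rather not rely on a config file, S3FileSystem also accepts credentials directly; a minimal sketch with placeholder key/secret values:

import pandas as pd
from s3fs.core import S3FileSystem

# placeholder credentials -- substitute your own, or omit them to fall back
# to the default AWS credential chain
s3 = S3FileSystem(anon=False, key='YOUR_ACCESS_KEY', secret='YOUR_SECRET_KEY')
df = pd.read_csv(s3.open('your-bucket-name/path/to/your-csv.csv', mode='rb'))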
Pandas uses boto (not boto3) inside read_csv, so you might be able to install boto and have it work correctly.
There are some issues with boto on Python 3.4.4 / 3.5.1. If you're on those versions, and until those are fixed, you can use boto3 instead:
import boto3
import pandas as pd
s3 = boto3.client('s3')
obj = s3.get_object(Bucket='bucket', Key='key')
df = pd.read_csv(obj['Body'])
The obj['Body'] returned by get_object has a .read method (which returns a stream of bytes), and that is enough for pandas.
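If the default credential chain doesn't cover your private bucket, boto3.client also accepts credentials explicitly; a sketch with placeholder values:

import boto3
import pandas as pd

# placeholder credentials -- normally you would rely on the default chain
s3 = boto3.client(
    's3',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)
obj = s3.get_object(Bucket='bucket', Key='key')
df = pd.read_csv(obj['Body'])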