Unloading from Redshift to S3 with headers

Just to complement the answer: to ensure the header row comes first, you don't have to order by a specific column of data. You can enclose the UNIONed selects inside another select, add an ordinal column to them, and then order by that column in the outer select without including it in the list of selected columns.

UNLOAD ('
  SELECT column_1, column_2 FROM (
     SELECT 1 AS i, \'column_1\' AS column_1, \'column_2\' AS column_2
     UNION ALL
     SELECT 2 AS i, column_1::varchar(255), column_2::varchar(255)
     FROM source_table_for_export_to_s3
  ) t ORDER BY i
')
TO 's3://bucket/path/file_name_for_table_export_in_s3_'
CREDENTIALS
 'aws_access_key_id=...;aws_secret_access_key=...'
DELIMITER ','
PARALLEL OFF
ESCAPE
ADDQUOTES;

If any of your columns are non-character, you need to cast them explicitly to char or varchar, because the UNION with the character header row otherwise forces an implicit cast.
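For instance, a minimal sketch of such casts, assuming a hypothetical orders table with an integer id and a date created_at (inside an UNLOAD string literal the quotes would be escaped as \' as in the example above):

-- Hypothetical table for illustration: orders(id integer, created_at date).
-- Non-character columns must be cast explicitly so they UNION cleanly
-- with the character header row.
SELECT 'id' AS id, 'created_at' AS created_at
UNION ALL
SELECT id::varchar(255), created_at::varchar(255)
FROM orders;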

Here is an example of the full statement that will create a file in S3 with the headers in the first row.

The output will be a single CSV file with quoted values.

This example assumes numeric values in column_1: cast to varchar, numeric strings sort before the alphabetic header string, so ORDER BY 1 DESC puts the header row first. Adjust the ORDER BY clause to a numeric column in your own table to ensure the header row lands in row 1 of the S3 file.

    ******************************************

    /* Redshift export to S3 CSV single file with headers - limit 6.2GB */

    UNLOAD ('

        SELECT \'column_1\', \'column_2\'

        UNION

        SELECT
          CAST(column_1 AS varchar(255)) AS column_1,
          CAST(column_2 AS varchar(255)) AS column_2
        FROM source_table_for_export_to_s3
        ORDER BY 1 DESC
        ;

    ')
    TO 's3://bucket/path/file_name_for_table_export_in_s3_'
    CREDENTIALS
     'aws_access_key_id=<key_with_no_<>_brackets>;aws_secret_access_key=<secret_access_key_with_no_<>_brackets>'
    PARALLEL OFF
    ESCAPE
    ADDQUOTES
    DELIMITER ','
    ALLOWOVERWRITE
    GZIP
    ;

    ****************************************

Redshift UNLOAD provides no direct option for this.

But we can tweak the query to generate a file with a header row added.

First, we use the PARALLEL OFF option so that it creates only one file.

"By default, UNLOAD writes data in parallel to multiple files, according to the number of slices in the cluster. The default option is ON or TRUE. If PARALLEL is OFF or FALSE, UNLOAD writes to one or more data files serially, sorted absolutely according to the ORDER BY clause, if one is used. The maximum size for a data file is 6.2 GB. So, for example, if you unload 13.4 GB of data, UNLOAD creates the following three files."

To get headers in the unloaded file, we do the following.

Suppose you have a table like this:

create table mytable
(
    name varchar(64) default NULL,
    address varchar(512) default NULL
);

Then use a SELECT like the one below in your UNLOAD to add the headers as well:

( select 'name','address') union ( select name,address from mytable )

This will add the headers name and address as the first line of your output; as noted in the first answer, add an ORDER BY if you need to guarantee that the header row actually comes first. A full sketch follows below.
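Putting it together with the ordinal trick from the first answer, a sketch of the full statement, assuming the mytable above and the same IAM role placeholder:

UNLOAD ('
  SELECT name, address FROM (
    SELECT 1 AS i, \'name\' AS name, \'address\' AS address
    UNION ALL
    SELECT 2 AS i, name, address
    FROM mytable
  ) t ORDER BY i          -- ordinal guarantees the header row sorts first
')
TO 's3://bucket/prefix/mytable_with_header_'
IAM_ROLE '<role arn>'
DELIMITER ','
PARALLEL OFF;

Both columns here are already varchar, so no casts are needed.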


As of cluster version 1.0.3945, Redshift supports unloading data to S3 with a header row in each file, e.g.:

UNLOAD('select column1, column2 from mytable;')
TO 's3://bucket/prefix/'
IAM_ROLE '<role arn>'
HEADER;

Note: you can't use the HEADER option in conjunction with FIXEDWIDTH.
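For instance, a sketch combining HEADER with the single-file options discussed above (same placeholders assumed):

UNLOAD ('select name, address from mytable')
TO 's3://bucket/prefix/mytable_'
IAM_ROLE '<role arn>'
DELIMITER ','
ADDQUOTES     -- HEADER works with ADDQUOTES, just not with FIXEDWIDTH
HEADER
PARALLEL OFF; -- single file, header written once at the top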

https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html