Why do I sometimes get a KeyError when using the SQS client?

This GitHub issue suggests creating the SQS client once at the top level (rather than inside the function):

import boto3

# Module-level client, created once when the module is imported
sqs = boto3.client('sqs',
                   region_name=S3_BUCKET_REGION,
                   aws_access_key_id=AWS_ACCESS_KEY_ID,
                   aws_secret_access_key=AWS_SECRET_ACCESS_KEY)


def consume_msgs():
    # code to process messages
    ...

Maybe I misunderstand some of the other answers, but in a multithreaded program I don't think creating one boto3 client object and passing it to functions that run in separate threads will work. I had been seeing sporadic endpoint_resolver errors when invoking a boto3 client service; they stopped once I followed the example in the documentation and the comments on boto3 GitHub issues such as #1246 and #1592 and created a separate session object in each thread. In my case it was an almost trivial change, going from

client = boto3.client(variant, region_name=creds['region_name'],
                      aws_access_key_id=...,
                      aws_secret_access_key=...)

to

session = boto3.session.Session()
client = session.client(variant, region_name=creds['region_name'],
                        aws_access_key_id=...,
                        aws_secret_access_key=...)

in the function that runs in separate threads. My reading of the OP's consume_msgs() is that a similar change, creating the session and client inside the function rather than sharing one across threads, would eliminate the occasional endpoint_resolver error. A minimal sketch of that pattern follows.
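For illustration, here is a minimal sketch of that pattern, not the OP's actual code: the queue URL, region, and number of worker threads are placeholders, credentials come from the default credential chain rather than explicit keys, and the message "processing" is just a print. Each thread builds its own Session and client instead of sharing a module-level one:

import threading

import boto3

QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'  # placeholder


def consume_msgs():
    # Each thread creates its own Session (and therefore its own client)
    # instead of sharing a single module-level client across threads.
    session = boto3.session.Session()
    sqs = session.client('sqs', region_name='us-east-1')

    resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        print(msg['Body'])  # placeholder for real message processing
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg['ReceiptHandle'])


threads = [threading.Thread(target=consume_msgs) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Because everything boto3-related is created inside consume_msgs(), nothing is shared between threads, which is the property the documentation and the linked GitHub comments recommend.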