moto: Test cases written with moto using the latest version of boto3 fail

Test cases written with moto make actual AWS API calls through botocore instead of mocking them. This happens with the latest version of boto3 (1.8.x). It used to work without issues with the 1.7.x versions.

Sample code to reproduce the error

import boto3
import json
from moto import mock_s3

@mock_s3
def test_mock_s3():
    client = boto3.client('s3', region_name='us-east-1')
    client.create_bucket(Bucket='testbucket')
    response = client.list_buckets()
    print(json.dumps(response, default=str))

if __name__ == "__main__":
    test_mock_s3()

Expected result

The method should return the ListBuckets response. It should look something like:

{"Owner": {"DisplayName": "webfile", "ID": "bcaf1ffd86f41161ca5fb16fd081034f"}, "Buckets": [{"CreationDate": "2006-02-03 16:45:09+00:00", "Name": "testbucket"}], "ResponseMetadata": {"RetryAttempts": 0, "HTTPStatusCode": 200, "HTTPHeaders": {"Content-Type": "text/plain"}}}

Actual error

botocore.errorfactory.BucketAlreadyExists: An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.

Full stack trace

Traceback (most recent call last):
  File "testcases.py", line 14, in <module>
    test_mock_s3()
  File "/private/tmp/virtualenv2/lib/python2.7/site-packages/moto/core/models.py", line 71, in wrapper
    result = func(*args, **kwargs)
  File "testcases.py", line 8, in test_mock_s3
    client.create_bucket(Bucket='testbucket')
  File "/private/tmp/virtualenv2/lib/python2.7/site-packages/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/private/tmp/virtualenv2/lib/python2.7/site-packages/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.BucketAlreadyExists: An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.

Library versions

moto : 1.3.4
boto3 : 1.8.1 - fails
boto3 : 1.7.84 - succeeds

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 60
  • Comments: 84 (14 by maintainers)

Most upvoted comments

I think we should add some documentation to the readme to help clarify how to avoid issues similar to @monkut's. It’s not great to hear that your unit tests manipulated your real environment, and that is a major problem.

I’m a big fan of pytest and pytest fixtures. For all of my moto tests, I have a conftest.py file where I define the following fixtures:

import os

import boto3
import pytest
from moto import mock_cloudwatch, mock_s3, mock_sts


@pytest.fixture(scope='function')
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
    os.environ['AWS_SECURITY_TOKEN'] = 'testing'
    os.environ['AWS_SESSION_TOKEN'] = 'testing'

@pytest.fixture(scope='function')
def s3(aws_credentials):
    with mock_s3():
        yield boto3.client('s3', region_name='us-east-1')


@pytest.fixture(scope='function')
def sts(aws_credentials):
    with mock_sts():
        yield boto3.client('sts', region_name='us-east-1')


@pytest.fixture(scope='function')
def cloudwatch(aws_credentials):
    with mock_cloudwatch():
        yield boto3.client('cloudwatch', region_name='us-east-1')

... etc.

All of the AWS/mocked fixtures take the aws_credentials fixture as a parameter, which sets the required fake environment variables. Then, whenever I need to do anything with the mocked AWS environment, I do something like:

def test_create_bucket(s3):
    # s3 is a fixture defined above that yields a boto3 s3 client.
    # Feel free to instantiate another boto3 S3 client -- Keep note of the region though.
    s3.create_bucket(Bucket="somebucket")
   
    result = s3.list_buckets()
    assert len(result['Buckets']) == 1
    assert result['Buckets'][0]['Name'] == 'somebucket'

Taking this approach works for all of my tests. I have had some issues with Tox and Travis CI occasionally – and typically I need to do something along the lines of touch ~/.aws/credentials for those to work. However, using the latest moto, boto3, and the fixtures I have above seems to always work for me without issues.
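The touch ~/.aws/credentials workaround mentioned above can also be scripted as part of test setup. Here is a minimal sketch; the helper name and the optional base_dir parameter are my own additions (so the function can be exercised outside of $HOME), not anything from moto:

```python
from pathlib import Path


def ensure_credentials_file(base_dir=None):
    """Create an empty .aws/credentials file if one does not exist.

    Equivalent to `touch ~/.aws/credentials`, which some CI setups
    (e.g. Tox / Travis CI) need before botocore will run at all.
    `base_dir` defaults to the user's home directory.
    """
    base = Path(base_dir) if base_dir else Path.home()
    creds = base / ".aws" / "credentials"
    creds.parent.mkdir(parents=True, exist_ok=True)
    creds.touch(exist_ok=True)  # no-op if the file is already there
    return creds
```

Calling ensure_credentials_file() once in conftest.py (before any boto3 client is created) would make the workaround explicit instead of a manual step.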

Also, protip: this might be controversial, but I always make use of in-function import statements for my unit tests. This ensures that the mocks are established before any of the actual code runs. Example:

def test_something(s3):
   from some.package.that.does.something.with.s3 import some_func  # <-- In-function import for unit test
   # ^^ Importing here ensures that the mock has been established.

   some_func()  # The mock has been established by the fixture, so this function that uses
                # an S3 client will properly use the mock and not reach out to AWS.

Just reporting that I can still reproduce the error with boto3==1.9.70 and moto==1.3.7 by the following:

import boto3
boto3.client('s3')  # ***By having this line, mock_s3 failed and real S3 was created***
import json
from moto import mock_s3

@mock_s3
def test_mock_s3():
    client = boto3.client('s3', region_name='us-east-1')
    client.create_bucket(Bucket='testbucket')
    response = client.list_buckets()
    print(json.dumps(response, default=str))

if __name__ == "__main__":
    test_mock_s3()

This runs on

boto3==1.9.70
botocore==1.12.40
moto==1.3.7

Hope this helps.

What’s the current situation with this? 😃 The long-term solution, to be exact!

@grudelsud I agree that pinning boto3 is not a viable long-term solution, and introduces problems of its own. But given that users are currently being affected by tests silently falling back to making real boto3 calls (which can have significant unexpected side-effects if you happen to have valid AWS credentials available), it seems worthwhile to consider a quick solution as well as a permanent solution. I feel that, for now, the problems from pinning boto3 are lower severity than doing nothing.

That said, I would be a million times happier if someone put forward a genuine fix. But I lack the time and knowledge to do that myself at this point, or even to guess how much work would be involved.

@4lph4-Ph4un I’ve opened a PR that begins the work to move moto off of patching libraries to implement the core functionality and instead use the before-send event we recently added in botocore. There are still a few kinks that need to be worked out, however.

@spulec Have you had a chance to look at #1847?

It looks like the issue is actually with botocore >= 1.11.0 which no longer uses requests and instead directly uses urllib3: https://github.com/boto/botocore/pull/1495. This means moto probably can’t use responses anymore…

I mean, we don’t do development in production environments, but it’s still expensive when your test suite spins up a couple dozen VPCs and EC2 instances.

more reliable in preventing access to real systems

You should never rely on moto or any other library to prevent (accidental) access to your production resources. The only way to prevent this is to not use production credentials outside of production (dev, test, stage, …).
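One way to follow that advice inside a test suite is to overwrite the credential environment variables with obviously fake values before any client is created. A minimal sketch — the variable names match the fixtures shown earlier in the thread; AWS_DEFAULT_REGION and the helper name are my own additions:

```python
import os

# Dummy values; any non-empty strings work, because moto never validates them.
FAKE_AWS_ENV = {
    "AWS_ACCESS_KEY_ID": "testing",
    "AWS_SECRET_ACCESS_KEY": "testing",
    "AWS_SECURITY_TOKEN": "testing",
    "AWS_SESSION_TOKEN": "testing",
    "AWS_DEFAULT_REGION": "us-east-1",
}


def use_fake_credentials():
    """Overwrite any real credentials in the current process before tests run."""
    os.environ.update(FAKE_AWS_ENV)
```

Running this at the very top of conftest.py means that even if a mock is missing, botocore only ever sees invalid credentials rather than your real ones.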

Edit: I think I just need to be more careful with import order

Is this meant to be fixed? I’m seeing what looks to be a similar issue trying to mock out DynamoDB calls.

moto==1.3.14 boto==2.49.0 boto3==1.14.7 botocore==1.17.7

Partial example:

def setup_table():
    ddb_client = boto3.client('dynamodb')
    ddb_client.create_table(
        AttributeDefinitions=[
            {'AttributeName': 'email', 'AttributeType': 'S'},
            {'AttributeName': 'timestamp', 'AttributeType': 'S'},
        ],
        TableName='contact-form-submissions',
        KeySchema=[
            {'AttributeName': 'email', 'KeyType': 'HASH'},
            {'AttributeName': 'timestamp', 'KeyType': 'RANGE'},
        ],
        ProvisionedThroughput={'ReadCapacityUnits': 1, 'WriteCapacityUnits': 1},
    )

@mock_dynamodb2
def test_save_to_db():
    setup_table()
    result = save_to_db(DATA)
    assert result is True

I’m getting an error:

botocore.errorfactory.ResourceInUseException: An error occurred (ResourceInUseException) when calling the CreateTable operation: Table already exists: contact-form-submissions

And it looks like that table is getting created on real AWS, not mocked as I’d expected. Am I doing something wrong or is there a regression?

Any updates on this?

I’ve updated to

boto==2.49.0
boto3==1.9.47
botocore==1.12.47
moto==1.3.7

and I am still seeing @mikegrima’s error when using @mock_ssm:

ClientError: An error occurred (UnrecognizedClientException) when calling the PutParameter operation: The security token included in the request is invalid.

Plus, I get another error when using @mock_secretsmanager:

ConnectionClosedError: Connection was closed before we received a valid response from endpoint URL: "https://secretsmanager.eu-west-1.amazonaws.com/".

I got the same problem and my solution was to import the moto libraries before the boto3 library. There are certainly some conflicts between the libraries. Hope it’ll help some people 😃

Same here with

boto==2.49.0
boto3==1.7.84
botocore==1.10.84
moto==1.3.6

Have to set environment variables for AWS otherwise it doesn’t work.

You should never rely on moto or any other library to prevent (accidental) access to your production resources. The only way to prevent this is to not use production credentials outside of production (dev, test, stage, …).

Totally agreed; but you have to account for the human factor (like junior developers!).

Another thing to bear in mind is that moto doesn’t require credentials in the same way boto does, so if you have your local AWS credentials set up (e.g. in ~/.aws) and you run the tests locally on a project with no credentials override, you will encounter the same problem.

Thinking out loud - but perhaps moto should require credentials - and warn that they shouldn’t be your real creds?
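The "require credentials and warn" idea could look something like the following guard, run once at the start of a test session. Everything here is hypothetical — the helper and the dummy-value convention are not part of moto:

```python
import os

FAKE_MARKER = "testing"  # agreed-upon dummy value for all tests


def assert_fake_credentials():
    """Fail fast if the environment looks like it holds real AWS credentials.

    A real guard would be more sophisticated; this only checks that the key
    variables are set to the agreed-upon dummy value.
    """
    for var in ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"):
        if os.environ.get(var) != FAKE_MARKER:
            raise RuntimeError(
                f"{var} is not set to the dummy value {FAKE_MARKER!r}; "
                "refusing to run tests that might touch real AWS."
            )
```

A check like this would at least turn the silent fall-through into a loud, explainable failure — which addresses the human-factor concern above.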

I would say that people shouldn’t have to fear that running unit tests which use moto might modify their real AWS environments. I don’t suspect there are folks who want tests that use both real AWS and still mock some bits of AWS. What I’m trying to say is that using moto currently requires a lot of attention, and even that may be in vain with a newer version of the AWS libraries or of moto. Maybe we should all together put some pressure on Amazon so they expose some mocking points to make everyone’s life easier? Amazon has a feedback page, IIRC.

Obviously we want to update moto so that it can work with the new implementation of botocore.

Whilst we figure out what this looks like, is it worthwhile restricting moto’s install_requires in setup.py to require only compatible versions of botocore? Clearly this isn’t a perfect solution, but if it saves some pain for some of our users then it could be worthwhile.

EDIT: I have added a PR for this as a discussion starter

I got the same problem and my solution was to import the moto libraries before the boto3 library. There are certainly some conflicts between the libraries. Hope it’ll help some people 😃

Indeed, when care is taken to import Moto before any imports of Boto3 or Botocore, the mocking works properly. I had to watch for imported modules which were importing Boto. Also, when running Pytest on multiple files, imports for tests in one file would interfere with the ones run subsequently. I had to import Moto in any test files that might import modules which ultimately import Boto.

Changing the import order is still not working for me 😢. Using the same versions: boto3 = “==1.9.189”, botocore = “==1.12.189”, moto = “==1.3.13” / “==1.3.10”

I can still reproduce the issue with S3 and SQS tests on boto3 = "==1.9.189" botocore = "==1.12.189" moto = "==1.3.13" Same with moto = "==1.3.10"

The fix applied in moto 1.3.5 appears to be incomplete: when using Pipenv on a clean environment to install moto, dependency resolution fails, because moto wants a version of botocore < 1.11 while the current version of boto3 (1.9.4) requires botocore >= 1.12.4 (so you still have to pin boto3 < 1.8; boto3 1.8 wanted botocore 1.11.0)

I think you probably need to also pin boto3 in your install_requires.

Steps to repro:

  • Install pipenv
  • mkdir moto-repro && cd moto-repro
  • pipenv install moto # installs packages and errors out trying to generate Pipfile.lock
  • pipenv graph # shows installed versions and dependency tree

Note pipenv install 'boto3<1.8.0' moto works

This might (also) be considered a failure in pipenv’s dependency resolution, but at the moment moto isn’t installable using pipenv without doing a bunch of troubleshooting.

I agree with @will-ockmore; dependency management can be a tricky thing to get right. Checking the botocore version and raising an error would not only be more reliable in preventing access to real systems, but it would also inform users as to why their tests might have suddenly stopped working.
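Such a check could be as simple as comparing the installed botocore version at import time and failing loudly. A sketch, assuming the caller passes in botocore.__version__; the function and the (1, 11) cutoff are illustrative, not moto’s actual code:

```python
def check_botocore_version(version, maximum=(1, 11)):
    """Raise ImportError if the installed botocore is too new for this moto.

    `version` is a string like botocore.__version__; `maximum` is the first
    known-incompatible (major, minor) pair.
    """
    parts = tuple(int(p) for p in version.split(".")[:2])
    if parts >= maximum:
        raise ImportError(
            f"botocore {version} is not supported by this version of moto; "
            "install botocore<1.11 or upgrade moto."
        )
```

moto could call check_botocore_version(botocore.__version__) in its top-level __init__, so the failure mode is an immediate, self-explanatory error instead of tests silently hitting real AWS.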

Can we have a release for the temporary work-around, please?

Everyone has to pin to boto3<1.8 until the proper fix is made in any case; it would be better to have that pin in one place (moto) rather than in every project that depends on it.
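Having the pin live in moto itself would amount to a setup.py change along these lines. This is a hypothetical excerpt — the version ranges are illustrative, not the exact pins that were merged:

```python
# hypothetical excerpt from moto's setup.py, pinning the transitively
# incompatible packages in one place so downstream projects don't have to:
install_requires = [
    "boto3>=1.6,<1.8",
    "botocore>=1.9,<1.11",
    # ... other dependencies unchanged ...
]
```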

@garyd203 I’ve been driving a lot of the changes to get botocore tracking upstream dependencies instead of using our vendored versions. What does moto require so that you guys wouldn’t need to monkey patch our dependencies?

pinning to boto3<1.8 is not really a solution as it would outdate the library pretty soon and doesn’t solve when installing dependencies relying on newer versions either, as this will cause incompatibilities.
I appreciate this comment doesn’t bring much to the table, but can we have an idea of how big of an effort would be to update moto to reflect the current status of libraries?

Noticing an issue with PynamoDB (https://github.com/pynamodb/PynamoDB).

It appears to be making use of the botocore vendored requests library: https://github.com/pynamodb/PynamoDB/blob/master/pynamodb/connection/base.py#L19-L20

Unfortunately, with the latest moto I’m getting:

 pynamodb.exceptions.VerboseClientError: An error occurred (UnrecognizedClientException) on request (...) on table (...) when calling the DescribeTable operation: The security token included in the request is invalid.

@joguSD: what happens now for libraries that are still utilizing the vendored requests in botocore? Are these easy fixes?

EDIT: I just tried overriding their Requests session library as documented here: https://github.com/pynamodb/PynamoDB/issues/558, and it now works 👏 .

In the meantime, I am going to submit a PR to PynamoDB that swaps out the vendored Requests for the normal Requests library and hopefully that will work 🤞 .

I ran into this issue when using moto with aws-xray-sdk. I’m working on a fix: https://github.com/spulec/moto/pull/1808

@spulec I’d rather we reach some sort of compromise that allows you to continue doing what you are now without having to patch anything.

@garyd203 I personally really like option 1, and is a large reason I’ve begun working on refactoring how botocore makes HTTP requests. I’d like to eventually get to a place where it’s relatively easy to provide alternative HTTP clients, but I’m not sure how far out that is as we’ll have to solidify a lot of interfaces we’ve considered internal to botocore.

As for option 2, that seems a little more reasonable in the short term. We’ve been playing with the idea of a before-send event in botocore that would allow custom handlers to return an HTTP response instead of going through botocore’s default HTTP layer.
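The before-send idea is essentially a dispatch point where the first handler to return a response short-circuits the HTTP layer. Stripped of botocore specifics, the pattern looks like this — all names here are illustrative, not botocore’s real API:

```python
class Emitter:
    """Toy event emitter modelling the proposed before-send hook."""

    def __init__(self):
        self._handlers = []

    def register(self, handler):
        self._handlers.append(handler)

    def emit_until_response(self, request):
        # The first handler to return a non-None value wins, mirroring how a
        # mocking layer like moto could intercept a request before it is sent.
        for handler in self._handlers:
            response = handler(request)
            if response is not None:
                return response
        return None  # no handler claimed it: fall through to the real client


def send(emitter, request, real_send):
    """Dispatch to registered handlers, else use the real HTTP client."""
    mocked = emitter.emit_until_response(request)
    return mocked if mocked is not None else real_send(request)
```

The appeal for moto is that it would register one handler for the AWS endpoints it fakes and return None for everything else, with no monkey-patching of the HTTP library at all.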

Hi @joguSD , thanks for asking. I’m just an occasional contributor to moto myself, and I’m not very familiar with the internals of moto. That said, I will try to provide some comments…

The goal of moto is to provide a fake implementation of specified AWS services which are accessed via boto3, as if they were the live services. FWIW, my understanding of the current implementation is that we have achieved this by mocking out HTTPAdapter.send in botocore’s vendored version of requests, so that we can inspect each request and either pass it off to our internal handler for the fake service, or pass it through to the original send. You can see this in moto.core.models, with botocore_mock and ResponsesMockAWS

Moving forward, there seem to be a few choices for how we could do this better. I suspect we are constrained to work with the HTTP request rather than some other part of the boto3/botocore stack. So just to start the discussion, here’s a couple of options:

  1. Use a pluggable HTTP client backend for botocore, so that moto can wrap the standard HTTP backend with its own interceptor functionality
  2. Add a filter/interceptor chain in botocore’s HTTP request handling, where moto can inject its own filter early on and modify the behaviour based on what request is being made.

Do you have any opinion on these (or other) implementation choices, from botocore’s perspective? My personal preference would be a pluggable backend with a standard implementation that is extensible and/or wrappable.
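Option 1 amounts to an injectable HTTP client interface that moto could wrap. A rough sketch of what a wrappable backend might look like — purely illustrative, since nothing like this existed in botocore at the time:

```python
from abc import ABC, abstractmethod


class HTTPBackend(ABC):
    """Minimal pluggable HTTP client interface."""

    @abstractmethod
    def send(self, request):
        """Send a prepared request and return a response."""


class InterceptingBackend(HTTPBackend):
    """Wraps a real backend; diverts matching requests to a fake handler."""

    def __init__(self, inner, matcher, fake_handler):
        self.inner = inner
        self.matcher = matcher
        self.fake_handler = fake_handler

    def send(self, request):
        if self.matcher(request):
            return self.fake_handler(request)  # moto-style fake response
        return self.inner.send(request)        # pass through to the network
```

With a design like this, moto would never monkey-patch anything: it would simply hand botocore an InterceptingBackend wrapping the default one.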

Again, don’t take any of this discussion as definitive (I can’t speak for the moto project maintainers), but I hope it helps.

______ERROR collecting moto.py _________________
ImportError while importing test module '/Users/dmullen/scratch/py-serverless/fixture-tests/moto.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
moto.py:1: in <module>
    from moto import mock_s3
E   ImportError: cannot import name 'mock_s3' from partially initialized module 'moto' (most likely due to a circular import) (/Users/dmullen/scratch/py-serverless/fixture-tests/moto.py)

@drewmullen The name of your test file conflicts with the moto package. Rename your file to something other than moto.py.

Finally!

Just wanted to report that the issue @jethrolam posted is still valid: https://github.com/spulec/moto/issues/1793#issuecomment-449475823

This has to do with import order! We fixed all of our issues by upgrading moto to latest master and running from moto import mock_s3 as early as possible (before running any boto3.client/boto3.resource calls).

import boto3
import json
from moto import mock_s3
boto3.client('s3')  # Moving this line to _after_ the import works. Whatever boto3.client('s3') does internally is creating an instance that is not mocked by moto

@mock_s3
def test_mock_s3():
    client = boto3.client('s3', region_name='us-east-1')
    client.create_bucket(Bucket='testbucket')
    response = client.list_buckets()
    print(json.dumps(response, default=str))

if __name__ == "__main__":
    test_mock_s3()

@gdoron it’s shipped with moto 1.3.7, which is already available on PyPI. You might have forgotten to specify dummy credentials in your environment variables (like me). See https://github.com/spulec/moto/pull/1907/files#diff-354f30a63fb0907d4ad57269548329e3R26 .

@bernardgardner That is indeed a moto issue and a fix is proposed in https://github.com/spulec/moto/pull/1801

Just ran into this issue with an internal project, the dependencies were not pinned so when running on the CI server the newer version was installed and ran against our infrastructure. Thankfully nothing serious was changed by the tests.

The sub-dependency pinning fixes (#1794 , #1800 ) are possibly not sufficient - and I think for such a potentially damaging / expensive problem further steps should be taken (error on import if the versions are not correct perhaps?)

Obviously best practice is to pin everything, but sadly this doesn’t happen all the time!

With the latest releases available on PyPI, steps to access your real AWS resources with no errors / pip install warnings (using python 3.6.4 on my mac):

# set up environment
$ python -m venv env && source env/bin/activate && pip install boto3 moto
$ python
Python 3.6.4 (default, Mar 12 2018, 13:30:09)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto3
>>> import moto
>>> with moto.mock_s3():
...     s3 = boto3.resource('s3')
...     print([x for x in s3.buckets.all()])

# see the list of all your real s3 buckets

Install order matters: if you install moto first (e.g. pip install moto boto3) it correctly pins botocore; but as this can’t be depended on, the problem remains.

As a temporary measure, I’ve merged #1794 and released version 1.3.5.

If anyone does still want to use the newer boto3, it should work properly with the deprecated decorators (@mock_ecs_deprecated, etc). Those decorators mock at the socket level. We had been wanting to move away from that because they are fragile, but we may need to reconsider now.

Another option is to do a comparable thing to what we were doing before, but for urllib3. There looks to be at least one library out there: https://github.com/florentx/urllib3-mock

Finally, another option is to stop patching altogether and switch to moto only supporting standalone mode. I don’t particularly love this, but if it means avoiding hacky patching issues like this in the future, I think we seriously need to consider it.

So, we’ve released a temporary fix for now, but we need to think through some bigger picture questions in the near-term so people aren’t locked to old boto3/botocore. Since this is probably as good a place as any, feel free to comment here with thoughts.