Images were cached in memory using a memoized supplier. To allow growing
this cache with newly discovered images, the ImageCacheSupplier class has
been created. It keeps an in-memory cache of all discovered images and
acts as a view over the image cache that also gives access to those
images.
The in-memory cache for the discovered images expires with the session,
just as the image cache does.
The default memoized image supplier has been changed to the
ImageCacheSupplier so that all providers are injected with the right
instance, and the old supplier has been qualified with the 'imageCache'
name, in case a provider needs the basic image cache.
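A minimal sketch of the idea, using simplified, hypothetical names (the
real ImageCacheSupplier works with jclouds Image objects rather than
plain ids and differs in detail):

    import java.util.LinkedHashSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Supplier;

    // Hypothetical sketch: a view over a memoized image supplier that can
    // also register images discovered after the initial listing.
    public class GrowableImageCache implements Supplier<Set<String>> {
        private final Supplier<Set<String>> imageCache;   // the memoized supplier
        private final Map<String, Boolean> discovered = new ConcurrentHashMap<>();

        public GrowableImageCache(Supplier<Set<String>> imageCache) {
            this.imageCache = imageCache;
        }

        // Add an image found outside the listed set (e.g. fetched by id).
        public void register(String imageId) {
            discovered.put(imageId, Boolean.TRUE);
        }

        // The view: everything in the underlying cache plus discovered images.
        @Override
        public Set<String> get() {
            Set<String> all = new LinkedHashSet<>(imageCache.get());
            all.addAll(discovered.keySet());
            return all;
        }
    }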
Replaced hard-coded Strings in toString methods of static predicates with their enum.toString counterparts
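An illustrative before/after of that pattern, with hypothetical predicate
and enum names:

    import java.util.function.Predicate;

    // Hypothetical example: the enum constant supplies the string instead
    // of a duplicated literal.
    enum Status { RUNNING, SUSPENDED }

    final class NodeRunning implements Predicate<Status> {
        @Override
        public boolean test(Status status) {
            return status == Status.RUNNING;
        }

        @Override
        public String toString() {
            // before: return "nodeRunning(RUNNING)";
            return "nodeRunning(" + Status.RUNNING + ")";   // enum toString() yields "RUNNING"
        }
    }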
Added test 'testNodeRunningFailsOnSuspended'
Revert "Added test 'testNodeRunningFailsOnSuspended'"
This reverts commit 2a543bfe20540bb4f10ef4f86e845a63bdbe90e3.
Removed test 'testNodeRunningFailsOnSuspended'. Added test 'testNodeSuspendedReturnsTrueWhenSuspended'.
Renamed '
Revert "Renamed '"
This reverts commit 061e9292a812066562ab47ba5eea15337fc13c3d.
Renamed 'AtomicNodeSuspended.nodeRunning' to 'AtomicNodeSuspended.nodeSuspended'.
Where applicable, combined mock setup into a single call to 'replay(Object...)' instead of the old 'replay(node); replay(computeService);' pattern.
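For illustration, a sketch of the EasyMock consolidation; the mocked
types here are placeholders for the ones used in the real tests:

    import static org.easymock.EasyMock.createMock;
    import static org.easymock.EasyMock.replay;
    import static org.easymock.EasyMock.verify;

    // Placeholder interfaces standing in for the mocked jclouds types.
    interface Node { boolean isSuspended(); }
    interface ComputeService { Node getNode(String id); }

    public class ReplayVarargsExample {
        public static void main(String[] args) {
            Node node = createMock(Node.class);
            ComputeService computeService = createMock(ComputeService.class);

            // before:
            // replay(node);
            // replay(computeService);

            // after: one varargs call switches every mock to replay mode
            replay(node, computeService);

            // ... exercise the code under test ...

            verify(node, computeService);
        }
    }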
If the TemplateBuilderImpl is given an imageId but the image cannot be
found in the image cache, fall back to the GetImageStrategy to perform a
call to the provider to try to get it.
We've seen that in some cases images are not returned in the image list
even though they actually exist in the provider. This fix won't make them
available when filtering by other properties such as the operating
system, etc., but it will at least make them available if their id is
known.
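A rough sketch of the fallback, with the image type and the provider
lookup simplified to plain JDK types (the real code uses the jclouds
Image domain object and the GetImageStrategy):

    import java.util.Map;
    import java.util.Optional;
    import java.util.function.Function;
    import java.util.function.Supplier;

    // Hypothetical sketch: try the cached image list first, then ask the
    // provider directly for the given id.
    public class ImageLookup {
        private final Supplier<Map<String, String>> imageCache;        // id -> image (simplified)
        private final Function<String, String> getImageFromProvider;   // stands in for GetImageStrategy

        public ImageLookup(Supplier<Map<String, String>> imageCache,
                           Function<String, String> getImageFromProvider) {
            this.imageCache = imageCache;
            this.getImageFromProvider = getImageFromProvider;
        }

        public Optional<String> findImage(String imageId) {
            String cached = imageCache.get().get(imageId);
            if (cached != null) {
                return Optional.of(cached);
            }
            // Not in the listed images: fall back to a direct provider call.
            return Optional.ofNullable(getImageFromProvider.apply(imageId));
        }
    }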
Added the new ElasticHosts regions.
Updated the ElasticStack api to get the list of standard
drives using an API call. All providers except ServerLove
support the new API call, so the old logic in the ElasticStack
api has been moved to that provider. The rest of the providers will now
extract all the OperatingSystem information by parsing the name of the
StandardDrive.
A unit test has been added to the ElasticStack api with all the images
that were hardcoded, to make sure all names are still parsed as expected
and all information in the existing providers is kept.
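A hypothetical sketch of the kind of name parsing involved; the drive
name layout and the regular expression are assumptions, not the actual
provider code:

    import java.util.Optional;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical parser extracting OS family and version from a standard
    // drive name such as "Ubuntu 12.04 Server Edition Linux 64bit".
    public final class DriveNameParser {
        private static final Pattern OS =
              Pattern.compile("(?i)(ubuntu|debian|centos|windows)\\s+([\\d.]+)");

        public static Optional<String[]> parse(String driveName) {
            Matcher m = OS.matcher(driveName);
            if (!m.find()) {
                return Optional.empty();   // unknown layout: caller keeps defaults
            }
            return Optional.of(new String[] { m.group(1).toLowerCase(), m.group(2) });
        }

        public static void main(String[] args) {
            parse("Ubuntu 12.04 Server Edition Linux 64bit")
                  .ifPresent(os -> System.out.println(os[0] + " " + os[1]));
        }
    }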
Modified the default template for all ElasticHosts providers to
match newer Ubuntu images and updated the Template*Live tests
accordingly.
Also refactored the WellKnownImage map to a supplier so it is lazily
loaded when needed, avoiding unexpected errors when building the Guice
injector if there are authentication errors or similar.
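A minimal sketch of the lazy-loading refactor, assuming Guava's
Suppliers.memoize and a simplified map type:

    import com.google.common.base.Supplier;
    import com.google.common.base.Suppliers;
    import com.google.common.collect.ImmutableMap;
    import java.util.Map;

    // Hypothetical sketch: the map is no longer built while the Guice
    // injector is created; it is built on first use and then memoized.
    public class WellKnownImageSupplier implements Supplier<Map<String, String>> {
        private final Supplier<Map<String, String>> images =
              Suppliers.memoize(WellKnownImageSupplier::loadImages);

        private static Map<String, String> loadImages() {
            // In the real provider this would call the API; failures now
            // surface on first use instead of at injector-construction time.
            return ImmutableMap.of("ubuntu-12.04", "Ubuntu 12.04 LTS");
        }

        @Override
        public Map<String, String> get() {
            return images.get();
        }
    }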
The only users of this seem to be
org.jclouds.atmos.blobstore.strategy.FindMD5InUserMetadata and
org.jclouds.azureblob.blobstore.strategy.FindMD5InBlobProperties which
are themselves unused.
Rackspace cloud limits instance metadata to 5 key-value pairs, but
upstream Nova only sets the limit at 128 by default. This patch removes
the limit entirely; the official Python clients don't check it and the
server is responsible for enforcing it anyway.
Fixes: https://issues.apache.org/jira/browse/JCLOUDS-507
When issuing many simultaneous requests to Synaptic Atmos I observed:
HTTP/1.1 failed with code 500, error: AtmosError
[code=1040, message=The server is busy. Please try again.]
Previously all clients slept for fixed intervals and thus retried
around the same time. This commit adds a random delay which should
better distribute load on the provider.
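A minimal sketch of the randomized delay; the quadratic backoff and the
jitter range are assumptions, not the actual retry handler:

    import java.util.concurrent.ThreadLocalRandom;

    // Sketch: add a random component to each retry delay so clients that
    // failed at the same time do not all retry at the same time.
    public final class JitteredBackoff {
        public static long delayMillis(int failureCount, long periodMillis) {
            long base = periodMillis * failureCount * failureCount;           // quadratic backoff
            long jitter = ThreadLocalRandom.current().nextLong(periodMillis); // 0..period-1
            return base + jitter;
        }

        public static void main(String[] args) throws InterruptedException {
            for (int attempt = 1; attempt <= 3; attempt++) {
                Thread.sleep(delayMillis(attempt, 50L));
                System.out.println("retrying attempt " + attempt);
            }
        }
    }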
To repro issue 342, the following steps were taken:
- Spawn 500 parallel / 2000 total putBlobs to cloudfiles-uk
- Issue a SIGSTOP
- Wait 60 seconds
- Issue a SIGCONT
Without this patch, there are several hundred 408s.
With this patch, these are retried and complete successfully.
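For illustration, a hypothetical predicate for the retry decision; the
exact set of retryable codes in the real handler may differ:

    // Sketch: treat 408 Request Timeout like other transient failures.
    public final class RetryableStatus {
        public static boolean isRetryable(int statusCode) {
            switch (statusCode) {
                case 408:   // request timeout, e.g. after the client stalls and resumes
                case 500:
                case 503:
                    return true;
                default:
                    return false;
            }
        }
    }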
Previously jclouds could use an unlimited number of threads on its
user ExecutorService. While this ExecutorService will go away when we
complete deasyncafication, we should prevent jclouds from misbehaving
until that time.
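A minimal sketch of bounding the pool, not the actual jclouds executor
wiring:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Sketch: cap the number of user threads; extra tasks queue up instead
    // of spawning new threads without bound.
    public final class BoundedUserExecutor {
        public static ExecutorService create(int maxThreads) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                  maxThreads, maxThreads,
                  60L, TimeUnit.SECONDS,
                  new LinkedBlockingQueue<>());
            pool.allowCoreThreadTimeOut(true);   // let idle threads die off
            return pool;
        }
    }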
FileBackedOutputStream.asByteSource.getInput returns a FileInputStream
which we do not close. We later call FileBackedOutputStream.reset
which removes the underlying File. This fails on Windows which does
not support deleting an open file and leaks resources on other
platforms. Eagerly close to address this issue.
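A sketch of the eager close, written against the current Guava ByteSource
API (asByteSource().openStream()) rather than the older getInput():

    import com.google.common.io.ByteStreams;
    import com.google.common.io.FileBackedOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Sketch: close the stream before reset() so the backing file can be
    // deleted on Windows and no descriptor leaks elsewhere.
    public final class EagerCloseExample {
        public static byte[] readAndReset(FileBackedOutputStream out) throws IOException {
            try (InputStream in = out.asByteSource().openStream()) {
                return ByteStreams.toByteArray(in);
            } finally {
                out.reset();   // safe now: the FileInputStream is already closed
            }
        }
    }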