Overview
RamNode Cloud Object Storage provides an S3-compatible API, so you can manage your data with the same S3 tools and libraries you already use with AWS S3.
Prerequisites
Step 1: Set Up the OpenStack CLI
To get started, you'll need to set up the OpenStack CLI client to generate credentials for the S3 API.
Follow our OpenStack SDK Tutorial for detailed setup instructions. Make sure to select the region where you've created your object stores when exporting credentials.
Generate S3 Credentials
Step 2: Create EC2 Credentials
Once the OpenStack CLI is working, create your S3-compatible credentials:
openstack ec2 credentials create

The output will look like this:
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| access | 00000000000000000000000000000000 |
| links | {'self': '[...]'} |
| project_id | 00000000000000000000000000000000 |
| secret | 00000000000000000000000000000000 |
| trust_id | None |
| user_id | 00000000000000000000000000000000 |
+------------+----------------------------------+

Important
Save the "access" and "secret" values securely. These are your access key ID and secret access key for S3 API access.
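If you plan to use these values with boto3 or the AWS CLI, one common approach is to export them as the standard AWS environment variables (the zero values below are placeholders, not real credentials):

```shell
# Export the "access" and "secret" values from the table above.
# The zeros are placeholders - substitute your own credentials.
export AWS_ACCESS_KEY_ID='00000000000000000000000000000000'
export AWS_SECRET_ACCESS_KEY='00000000000000000000000000000000'
```

Most S3 tools, including boto3 and the AWS CLI, pick these variables up automatically.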
S3 Endpoints
The endpoint for your S3 client will be the controller domain name on port 8080. Choose the endpoint based on your region:
Los Angeles
https://lax-controller.ramnode.com:8080

New Jersey
https://ewr-controller.ramnode.com:8080

Netherlands
https://nlx-controller.ramnode.com:8080

Seattle - Not Supported Yet
https://sea-controller.ramnode.com:8080

Atlanta - Not Supported Yet
https://atl-controller.ramnode.com:8080

S3 Browser Configuration
Step 3: Configure S3 Browser
If you're using the S3 Browser application, configure it with these settings:
- Account Name: Any descriptive name
- Account Type: S3 Compatible Storage
- REST Endpoint: Your region's controller endpoint, including the port (e.g., lax-controller.ramnode.com:8080)
- Access Key ID: Your "access" value from credentials
- Secret Access Key: Your "secret" value from credentials
Python with boto3
Step 4: List Objects Example
Here's a complete example for listing objects using the boto3 Python library:
import boto3
# Assign your credentials (replace with your own values)
ACCESS_KEY_ID = '00000000000000000000000000000000'
SECRET_ACCESS_KEY = '00000000000000000000000000000000'
# Set the appropriate API URL for your region
S3_API_URL = 'https://lax-controller.ramnode.com:8080'
# Create an S3 client with your credentials and API URL
s3 = boto3.client('s3',
                  aws_access_key_id=ACCESS_KEY_ID,
                  aws_secret_access_key=SECRET_ACCESS_KEY,
                  endpoint_url=S3_API_URL)
# Define the bucket and prefix you want to list objects for
bucket_name = 'ExampleBucket'
prefix = ''
# List objects in the bucket with the specified prefix
response = s3.list_objects(Bucket=bucket_name, Prefix=prefix)
# Print the object names ('Contents' is absent when the bucket is empty)
for obj in response.get('Contents', []):
    print(obj['Key'])

Common Operations
Step 5: Upload Files
# Upload a file to a bucket
s3.upload_file('local-file.txt', 'my-bucket', 'remote-file.txt')
# Or use put_object for more control
with open('local-file.txt', 'rb') as f:
    s3.put_object(Bucket='my-bucket', Key='remote-file.txt', Body=f)

Step 6: Download Files
# Download a file from a bucket
s3.download_file('my-bucket', 'remote-file.txt', 'local-file.txt')
# Or use get_object
response = s3.get_object(Bucket='my-bucket', Key='remote-file.txt')
with open('local-file.txt', 'wb') as f:
    f.write(response['Body'].read())

Step 7: Delete Objects
# Delete an object from a bucket
s3.delete_object(Bucket='my-bucket', Key='remote-file.txt')

Step 8: List Buckets
# List all buckets
response = s3.list_buckets()
for bucket in response['Buckets']:
    print(bucket['Name'])

Using AWS CLI
You can also use the AWS CLI with RamNode Object Storage:
# Configure AWS CLI with your credentials
aws configure set aws_access_key_id YOUR_ACCESS_KEY
aws configure set aws_secret_access_key YOUR_SECRET_KEY
aws configure set default.region us-east-1
# Use the --endpoint-url parameter for all commands
aws s3 ls --endpoint-url=https://lax-controller.ramnode.com:8080
# Upload a file
aws s3 cp file.txt s3://my-bucket/ --endpoint-url=https://lax-controller.ramnode.com:8080
# Download a file
aws s3 cp s3://my-bucket/file.txt . --endpoint-url=https://lax-controller.ramnode.com:8080
# Sync a directory
aws s3 sync ./local-dir s3://my-bucket/ --endpoint-url=https://lax-controller.ramnode.com:8080

Frequently Asked Questions
How do I create an object store?
In the Cloud Control Panel, open the sidebar, expand "Cloud", and click "Object store". Select your region at the top right and click the "+" button. If the button is greyed out, object storage is not currently supported in that region.
How do I manage ACLs/permissions?
Currently, each credential has full access to all buckets. For security best practices:
- Create separate credentials for different applications or team members
- Rotate credentials regularly
- Never commit credentials to version control
- Use environment variables to store credentials
What S3 features are supported?
Our S3-compatible API supports most common S3 operations:
- Bucket operations (create, delete, list)
- Object operations (put, get, delete, list)
- Multipart uploads
- Object metadata
- Pre-signed URLs (limited support)
Troubleshooting
Connection Issues
If you're having trouble connecting:
- Verify you're using the correct endpoint for your region
- Ensure port 8080 is not blocked by your firewall
- Check that your credentials are correctly configured
- Make sure you've created at least one object store in the region
Access Denied Errors
If you receive access denied errors:
- Verify your access key and secret key are correct
- Ensure the bucket exists in the correct region
- Check that you've selected the right region when generating credentials
Pro Tip: Testing Your Setup
Start by listing your buckets using the AWS CLI or a simple boto3 script. This quickly confirms your credentials and endpoint are configured correctly before attempting more complex operations.
