Introduction
This guide demonstrates how to use the OpenStack API for programmatic management of cloud infrastructure. It covers common operations and shows how to automate VPS provisioning, networking, storage, and more.
What You'll Learn
- Authenticating with the OpenStack API
- Managing compute instances (VPS servers)
- Configuring private networks and floating IPs
- Creating and managing block storage volumes
- Working with snapshots and backups
- Managing SSH keys and security
- Monitoring resource usage and quotas
Prerequisites
- Active OpenStack account with API access enabled
- OpenStack credentials (available in your cloud control panel)
- Python 3.8+ (for Python examples) or curl (for direct API calls)
- Basic understanding of REST APIs and JSON
API Endpoints
Throughout this guide, you'll see endpoint variables. These represent the base URLs for different OpenStack services:
- Identity (Keystone): https://lax-controller.ramnode.com:5000/v3
- Compute (Nova): Available after authentication
- Networking (Neutron): Available after authentication
- Block Storage (Cinder): Available after authentication
- Image (Glance): Available after authentication
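The Compute, Networking, Block Storage, and Image endpoints are not fixed URLs; Keystone returns them in the service catalog of the token response, and you pick the one matching your service type, interface, and region. A minimal sketch of that lookup, where `find_endpoint` is our own helper and `sample_catalog` is an illustrative fragment shaped like the `catalog` list in a Keystone v3 token response, not your real catalog:

```python
def find_endpoint(catalog, service_type, interface='public', region=None):
    """Return the first matching endpoint URL from a Keystone v3 catalog."""
    for service in catalog:
        if service.get('type') != service_type:
            continue
        for ep in service.get('endpoints', []):
            if ep.get('interface') != interface:
                continue
            if region and ep.get('region') != region:
                continue
            return ep.get('url')
    return None

# Illustrative catalog fragment (your real catalog comes from
# the body of the POST /v3/auth/tokens response)
sample_catalog = [
    {
        'type': 'compute',
        'name': 'nova',
        'endpoints': [
            {'interface': 'public', 'region': 'NYC',
             'url': 'https://nyc-compute.example.com:8774/v2.1'},
        ],
    },
]

print(find_endpoint(sample_catalog, 'compute', region='NYC'))
```

The Python SDK does this lookup for you; this is only useful when driving the REST API directly.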
Authentication
Obtaining API Credentials
- Log into your Cloud Control Panel
- Navigate to API Access under Account Settings
- Generate or retrieve your OpenStack credentials:
- Auth URL: Your OpenStack Keystone endpoint
- Username: Your cloud username
- Password: Your API password
- Project ID: Your tenant/project identifier
- Domain: Typically default
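Many OpenStack tools, including the openstack CLI and the Python SDK, can also pick these credentials up from the standard OS_* environment variables. A sketch of an openrc-style file (all values are placeholders to replace with your own):

```shell
# openrc-style credentials file - replace the placeholder values,
# then load it into your current shell with: source openrc.sh
export OS_AUTH_URL="https://lax-controller.ramnode.com:5000/v3"
export OS_USERNAME="your-username"
export OS_PASSWORD="your-api-password"
export OS_PROJECT_NAME="your-project-name"
export OS_USER_DOMAIN_NAME="default"
export OS_PROJECT_DOMAIN_NAME="default"
export OS_REGION_NAME="NYC"
```

Keep this file out of version control, since it contains your API password.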
Using Python OpenStack SDK (Recommended)
Install the OpenStack SDK:
pip install openstacksdk
import openstack
# Create connection - SDK handles token management automatically
conn = openstack.connect(
auth_url='https://lax-controller.ramnode.com:5000/v3',
project_name='your-project-name',
username='your-username',
password='your-api-password',
region_name='NYC', # or SEA, ATL, NL depending on your region
user_domain_name='default',
project_domain_name='default'
)
# Verify connection
print(f"Connected to {conn.config.get_region_name()}")
Using cURL for REST API
curl -X POST https://lax-controller.ramnode.com:5000/v3/auth/tokens \
-H "Content-Type: application/json" \
-d '{
  "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "your-username",
          "domain": {"name": "default"},
          "password": "your-api-password"
        }
      }
    },
    "scope": {
      "project": {
        "name": "your-project-name",
        "domain": {"name": "default"}
      }
    }
  }
}' -i
The token is returned in the X-Subject-Token header. Save this for subsequent requests.
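If you script this flow rather than typing the cURL command, the request body can be built programmatically with only the standard library; `build_auth_request` below is our own helper, not part of any SDK. POST the JSON-encoded result to the /v3/auth/tokens endpoint, read the token from the X-Subject-Token response header, and send it as X-Auth-Token on later calls.

```python
import json

def build_auth_request(username, password, project_name, domain='default'):
    """Build the Keystone v3 password-auth body for POST /v3/auth/tokens."""
    return {
        'auth': {
            'identity': {
                'methods': ['password'],
                'password': {
                    'user': {
                        'name': username,
                        'domain': {'name': domain},
                        'password': password,
                    }
                },
            },
            'scope': {
                'project': {
                    'name': project_name,
                    'domain': {'name': domain},
                }
            },
        }
    }

body = json.dumps(build_auth_request('your-username', 'your-api-password',
                                     'your-project-name'))
print(body[:40])
```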
Using Configuration Files
Create ~/.config/openstack/clouds.yaml:
clouds:
  ramnode:
    auth:
      auth_url: https://lax-controller.ramnode.com:5000/v3
      username: your-username
      password: your-api-password
      project_name: your-project-name
      user_domain_name: default
      project_domain_name: default
    region_name: NYC
    interface: public
Then connect simply with:
import openstack
conn = openstack.connect(cloud='ramnode')
Instance Management
Listing Available Flavors (VPS Plans)
Flavors define the CPU, RAM, and disk resources for your VPS.
curl -X GET https://${NOVA_ENDPOINT}/v2.1/flavors/detail \
-H "X-Auth-Token: $TOKEN"
# List all available flavors
for flavor in conn.compute.flavors():
    print(f"{flavor.name}: {flavor.vcpus} vCPUs, {flavor.ram}MB RAM, {flavor.disk}GB disk")
Listing Available Images
# List available OS images
for image in conn.compute.images():
    print(f"{image.name} (ID: {image.id})")
Creating a VPS Instance
# Encode user_data as base64 first
USER_DATA=$(echo '#!/bin/bash
apt-get update
apt-get install -y nginx
systemctl enable nginx' | base64 -w 0)
curl -X POST https://${NOVA_ENDPOINT}/v2.1/servers \
-H "X-Auth-Token: $TOKEN" \
-H "Content-Type: application/json" \
-d '{
  "server": {
    "name": "web-server-01",
    "imageRef": "image-uuid",
    "flavorRef": "flavor-id",
    "networks": [{"uuid": "network-uuid"}],
    "key_name": "my-ssh-key",
    "user_data": "'"$USER_DATA"'"
  }
}'
# Create a new instance
server = conn.compute.create_server(
name='web-server-01',
image_id='image-uuid', # Ubuntu 24.04 image ID
flavor_id='flavor-id', # 2GB plan
networks=[{'uuid': 'network-uuid'}],
key_name='my-ssh-key', # SSH key name (optional)
user_data='''#!/bin/bash
apt-get update
apt-get install -y nginx
systemctl enable nginx
'''
)
# Wait for server to become active
conn.compute.wait_for_server(server)
print(f"Server {server.name} is now {server.status}")
Listing Instances
# List all servers
for server in conn.compute.servers():
    print(f"{server.name}: {server.status} - {server.addresses}")
Instance Actions
# Soft reboot (graceful)
conn.compute.reboot_server('server-id', 'SOFT')
# Hard reboot (power cycle)
conn.compute.reboot_server('server-id', 'HARD')
# Stop (shut down)
conn.compute.stop_server('server-id')
# Start (power on)
conn.compute.start_server('server-id')
# Resize to a different flavor
conn.compute.resize_server('server-id', 'new-flavor-id')
# Wait for resize to complete (wait_for_server expects a Server object, not an ID)
server = conn.compute.wait_for_server(conn.compute.get_server('server-id'), status='VERIFY_RESIZE')
# Confirm the resize
conn.compute.confirm_server_resize('server-id')
# Delete server
conn.compute.delete_server('server-id')
Accessing the VNC Console
# Get VNC console URL
console = conn.compute.create_server_remote_console(
server='server-id',
protocol='vnc',
console_type='novnc'
)
print(f"Console URL: {console.url}")
Networking
Creating a Private Network
# Create network
curl -X POST https://${NEUTRON_ENDPOINT}/v2.0/networks \
-H "X-Auth-Token: $TOKEN" \
-H "Content-Type: application/json" \
-d '{
  "network": {
    "name": "private-network-01",
    "admin_state_up": true
  }
}'
# Create subnet
curl -X POST https://${NEUTRON_ENDPOINT}/v2.0/subnets \
-H "X-Auth-Token: $TOKEN" \
-H "Content-Type: application/json" \
-d '{
  "subnet": {
    "name": "private-subnet-01",
    "network_id": "network-uuid",
    "cidr": "10.0.1.0/24",
    "ip_version": 4,
    "enable_dhcp": true,
    "dns_nameservers": ["8.8.8.8", "8.8.4.4"]
  }
}'
# Create a private network
network = conn.network.create_network(
name='private-network-01',
description='Internal network for web servers'
)
# Create a subnet
subnet = conn.network.create_subnet(
name='private-subnet-01',
network_id=network.id,
cidr='10.0.1.0/24',
ip_version=4,
enable_dhcp=True,
dns_nameservers=['8.8.8.8', '8.8.4.4']
)
print(f"Network created: {network.name} ({network.id})")
print(f"Subnet CIDR: {subnet.cidr}")
Floating IPs (Public IPs)
# List your floating IPs
for ip in conn.network.ips():
    print(f"{ip.floating_ip_address}: {ip.status} (Fixed: {ip.fixed_ip_address})")
# Allocate from public pool
floating_ip = conn.network.create_ip(
floating_network_id='external-network-id'
)
print(f"Allocated IP: {floating_ip.floating_ip_address}")
# Associate floating IP with server
conn.compute.add_floating_ip_to_server(
server='server-id',
address=floating_ip.floating_ip_address
)
# Remove floating IP from server
conn.compute.remove_floating_ip_from_server(
server='server-id',
address=floating_ip.floating_ip_address
)
# Delete/release the floating IP
conn.network.delete_ip(floating_ip)
Security Groups
# Create security group
sec_group = conn.network.create_security_group(
name='web-servers',
description='Allow HTTP, HTTPS, and SSH'
)
# Add SSH rule
conn.network.create_security_group_rule(
security_group_id=sec_group.id,
direction='ingress',
ethertype='IPv4',
protocol='tcp',
port_range_min=22,
port_range_max=22,
remote_ip_prefix='0.0.0.0/0'
)
# Add HTTP rule
conn.network.create_security_group_rule(
security_group_id=sec_group.id,
direction='ingress',
ethertype='IPv4',
protocol='tcp',
port_range_min=80,
port_range_max=80,
remote_ip_prefix='0.0.0.0/0'
)
# Add HTTPS rule
conn.network.create_security_group_rule(
security_group_id=sec_group.id,
direction='ingress',
ethertype='IPv4',
protocol='tcp',
port_range_min=443,
port_range_max=443,
remote_ip_prefix='0.0.0.0/0'
)
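The three near-identical SSH/HTTP/HTTPS rule calls can also be generated in a loop; a sketch where `rules_for_ports` is our own helper (not an SDK function) that builds the keyword arguments for each TCP port:

```python
def rules_for_ports(ports, remote='0.0.0.0/0'):
    """Build create_security_group_rule kwargs for a list of TCP ports."""
    return [
        {
            'direction': 'ingress',
            'ethertype': 'IPv4',
            'protocol': 'tcp',
            'port_range_min': port,
            'port_range_max': port,
            'remote_ip_prefix': remote,
        }
        for port in ports
    ]

# Usage (assumes an authenticated `conn` and a security group as above):
# for kwargs in rules_for_ports([22, 80, 443]):
#     conn.network.create_security_group_rule(security_group_id=sec_group.id, **kwargs)

print(len(rules_for_ports([22, 80, 443])))
```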
# Add security group to server
conn.compute.add_security_group_to_server(
server='server-id',
security_group='web-servers'
)
Block Storage (Volumes)
Creating a Volume
# Create a 100GB volume
volume = conn.block_storage.create_volume(
name='data-volume-01',
size=100, # Size in GB
description='Database storage volume'
)
# Wait for volume to become available
conn.block_storage.wait_for_status(volume, status='available')
print(f"Volume created: {volume.id}")
Attaching a Volume to an Instance
# Attach volume to server
attachment = conn.compute.create_volume_attachment(
server='server-id',
volume_id=volume.id,
device='/dev/vdb' # Device name on the instance
)
# Wait for attachment
conn.block_storage.wait_for_status(volume, status='in-use')
print(f"Volume attached to {attachment.server_id}")
On the instance, you'll need to format and mount the volume:
# Format the volume
sudo mkfs.ext4 /dev/vdb
# Create mount point
sudo mkdir /mnt/data
# Mount the volume
sudo mount /dev/vdb /mnt/data
# Add to fstab for persistent mounting
echo '/dev/vdb /mnt/data ext4 defaults 0 0' | sudo tee -a /etc/fstab
Resizing a Volume
# Extend volume size (can only increase)
conn.block_storage.extend_volume(volume, 200) # New size in GB
# Wait for completion
conn.block_storage.wait_for_status(volume, status='available')
After resizing, extend the filesystem on the instance:
sudo resize2fs /dev/vdb
Note: resize2fs applies to ext filesystems; for XFS, use xfs_growfs on the mount point instead.
Snapshots & Backups
Creating an Instance Snapshot
# Create snapshot of running instance
image = conn.compute.create_server_image(
server='server-id',
name='web-server-backup-2025-01-14',
metadata={'backup_type': 'manual', 'created_by': 'api'}
)
# Wait for the server to return to ACTIVE once the snapshot completes
server = conn.compute.get_server('server-id')
conn.compute.wait_for_server(server, status='ACTIVE')
print(f"Snapshot created: {image.id}")
Creating a Volume Snapshot
# Create snapshot of volume
snapshot = conn.block_storage.create_snapshot(
volume_id=volume.id,
name='data-snapshot-2025-01-14',
description='Daily backup',
force=True # Allow snapshot of in-use volume
)
# Wait for snapshot to complete
conn.block_storage.wait_for_status(snapshot, status='available')
Restoring from Snapshots
# Create new instance from snapshot image
restored_server = conn.compute.create_server(
name='restored-web-server',
image_id=image.id,
flavor_id='flavor-uuid',
networks=[{'uuid': 'network-uuid'}]
)
# Create new volume from snapshot
restored_volume = conn.block_storage.create_volume(
name='restored-data-volume',
snapshot_id=snapshot.id,
size=snapshot.size
)
Automated Backups
import schedule
import time
from datetime import datetime, timedelta
def backup_instance(server_id, retention_days=7):
    """Create snapshot and clean old ones"""
    # Create new snapshot
    timestamp = datetime.now().strftime('%Y%m%d-%H%M%S')
    image = conn.compute.create_server_image(
        server=server_id,
        name=f'auto-backup-{timestamp}',
        metadata={'auto_backup': 'true', 'created': timestamp}
    )
    # Clean old snapshots
    cutoff = datetime.now() - timedelta(days=retention_days)
    for img in conn.compute.images():
        if img.metadata.get('auto_backup') == 'true':
            # Parse with the same format strftime wrote above
            created = datetime.strptime(img.metadata['created'], '%Y%m%d-%H%M%S')
            if created < cutoff:
                conn.compute.delete_image(img)
                print(f"Deleted old backup: {img.name}")
    print(f"Created backup: {image.name}")
# Schedule daily backups at 2 AM
schedule.every().day.at("02:00").do(backup_instance, 'server-id')
while True:
    schedule.run_pending()
    time.sleep(60)
SSH Key Management
Uploading SSH Public Keys
curl -X POST https://${NOVA_ENDPOINT}/v2.1/os-keypairs \
-H "X-Auth-Token: $TOKEN" \
-H "Content-Type: application/json" \
-d '{
  "keypair": {
    "name": "my-workstation",
    "public_key": "'"$(cat ~/.ssh/id_rsa.pub)"'"
  }
}'
import os
# Upload SSH public key
with open(os.path.expanduser('~/.ssh/id_rsa.pub'), 'r') as f:
    public_key = f.read()
keypair = conn.compute.create_keypair(
name='my-workstation',
public_key=public_key
)
print(f"Key uploaded: {keypair.name}")
# List all keypairs
for keypair in conn.compute.keypairs():
    print(f"{keypair.name}: {keypair.fingerprint}")
# Delete keypair
conn.compute.delete_keypair('my-workstation')
Using SSH Keys with Instances
# Create instance with SSH key
server = conn.compute.create_server(
name='secure-server',
image_id='image-uuid',
flavor_id='flavor-uuid',
networks=[{'uuid': 'network-uuid'}],
key_name='my-workstation' # SSH key will be injected
)
Metadata & Tags
Setting Instance Metadata
# Set metadata on server
conn.compute.set_server_metadata(
server='server-id',
**{
'environment': 'production',
'application': 'wordpress',
'cost_center': 'marketing',
'managed_by': 'terraform'
}
)
# Get server metadata
metadata = conn.compute.get_server_metadata('server-id')
for key, value in metadata.items():
    print(f"{key}: {value}")
Using Metadata for Organization
from collections import defaultdict
# Find all production servers
production_servers = [
server for server in conn.compute.servers()
if server.metadata.get('environment') == 'production'
]
# Calculate costs by cost center
costs = defaultdict(float)
for server in conn.compute.servers():
    cost_center = server.metadata.get('cost_center', 'unassigned')
    # Example: using RAM as a proxy for cost
    costs[cost_center] += server.flavor['ram'] * 0.001
for center, cost in costs.items():
    print(f"{center}: ${cost:.2f}/hour")
Monitoring & Quotas
Checking Quotas
# Get compute quotas
limits = conn.compute.get_limits()
print(f"Max instances: {limits.absolute.max_total_instances}")
print(f"Current instances: {limits.absolute.total_instances_used}")
print(f"Max cores: {limits.absolute.max_total_cores}")
print(f"Current cores: {limits.absolute.total_cores_used}")
print(f"Max RAM: {limits.absolute.max_total_ram_size}MB")
print(f"Current RAM: {limits.absolute.total_ram_used}MB")
Setting Up Usage Alerts
import smtplib
from email.message import EmailMessage
def check_quota_usage(threshold=0.8):
    """Alert if resource usage exceeds threshold"""
    limits = conn.compute.get_limits()
    # Check instances
    instance_usage = limits.absolute.total_instances_used / limits.absolute.max_total_instances
    if instance_usage > threshold:
        send_alert(f"Instance usage at {instance_usage*100:.1f}%")
    # Check RAM
    ram_usage = limits.absolute.total_ram_used / limits.absolute.max_total_ram_size
    if ram_usage > threshold:
        send_alert(f"RAM usage at {ram_usage*100:.1f}%")
def send_alert(message):
    msg = EmailMessage()
    msg['Subject'] = 'OpenStack Quota Alert'
    msg['From'] = 'alerts@yourdomain.com'
    msg['To'] = 'admin@yourdomain.com'
    msg.set_content(message)
    with smtplib.SMTP('localhost') as s:
        s.send_message(msg)
Best Practices
1. Error Handling
Always implement proper error handling for API calls:
from openstack.exceptions import ResourceNotFound, BadRequestException
try:
    server = conn.compute.create_server(
        name='test-server',
        image_id='image-uuid',
        flavor_id='flavor-uuid',
        networks=[{'uuid': 'network-uuid'}]
    )
except BadRequestException as e:
    print(f"Invalid request: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")
2. Pagination
Handle large result sets with pagination:
# Paginate through all servers
marker = None
while True:
    servers = list(conn.compute.servers(marker=marker, limit=100))
    if not servers:
        break
    for server in servers:
        print(server.name)
    marker = servers[-1].id
3. Rate Limiting
Implement rate limiting to avoid API throttling:
import time
from functools import wraps
def rate_limit(calls_per_second=2):
    """Decorator to rate limit API calls"""
    min_interval = 1.0 / calls_per_second
    last_called = [0.0]
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            elapsed = time.time() - last_called[0]
            if elapsed < min_interval:
                time.sleep(min_interval - elapsed)
            result = func(*args, **kwargs)
            last_called[0] = time.time()
            return result
        return wrapper
    return decorator
@rate_limit(calls_per_second=2)
def create_server_safe(**kwargs):
    return conn.compute.create_server(**kwargs)
4. Cost Optimization
Monitor and optimize costs by finding unused resources:
def audit_unused_resources():
    """Find unused resources that may incur costs"""
    issues = []
    # Find detached volumes
    for volume in conn.block_storage.volumes():
        if volume.status == 'available':
            issues.append(f"Unused volume: {volume.name} ({volume.size}GB)")
    # Find unused floating IPs
    for ip in conn.network.ips():
        if not ip.fixed_ip_address:
            issues.append(f"Unassigned floating IP: {ip.floating_ip_address}")
    # Find stopped instances
    for server in conn.compute.servers():
        if server.status == 'SHUTOFF':
            issues.append(f"Stopped instance: {server.name}")
    return issues
# Run monthly cost audit
print("Cost Optimization Opportunities:")
for issue in audit_unused_resources():
    print(f" - {issue}")
Troubleshooting
Quota Exceeded Error
# Check current usage
limits = conn.compute.get_limits()
print(f"Instances: {limits.absolute.total_instances_used}/{limits.absolute.max_total_instances}")
print(f"Cores: {limits.absolute.total_cores_used}/{limits.absolute.max_total_cores}")
print(f"RAM: {limits.absolute.total_ram_used}MB/{limits.absolute.max_total_ram_size}MB")
# Solution: Delete unused resources or contact support for a quota increase
Instance Stuck in BUILD State
# Check instance fault
server = conn.compute.get_server('server-id')
if server.fault:
    print(f"Error: {server.fault['message']}")
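When the fault field is empty or unhelpful, the instance console log often reveals what went wrong during boot; a sketch using the SDK's get_server_console_output (the `tail` helper below is our own, not an SDK call):

```python
def tail(text, n=20):
    """Return the last n lines of a console log."""
    return '\n'.join(text.splitlines()[-n:])

# Usage (assumes an authenticated `conn`):
# output = conn.compute.get_server_console_output('server-id', length=50)
# print(tail(output.get('output', ''), 20))

print(tail('boot\ncloud-init start\ncloud-init done', 2))
```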
# Common causes:
# - Invalid flavor/image combination
# - Network configuration issues
# - Resource constraints on the hypervisor
Unable to Connect to Instance
# Verify security group rules
server = conn.compute.get_server('server-id')
for sg in server.security_groups:
    # find_security_group accepts a name or ID
    sec_group = conn.network.find_security_group(sg['name'])
    print(f"Security Group: {sec_group.name}")
    for rule in conn.network.security_group_rules(security_group_id=sec_group.id):
        print(f" {rule.direction}: {rule.protocol}:{rule.port_range_min}-{rule.port_range_max}")
# Verify floating IP association
print(f"Server IPs: {server.addresses}")
Enable Debug Mode
import logging
# Enable OpenStack SDK debug logging
logging.basicConfig(level=logging.DEBUG)
openstack.enable_logging(debug=True)
# Now all API calls will be logged
conn = openstack.connect(cloud='ramnode')
Additional Resources
- OpenStack API Reference
- Python OpenStack SDK Documentation
- OpenStack API - Common Questions
- OpenStack SDK Tutorial
- Terraform Deployment Guide
- Ansible Deployment Guide
Need Help?
If you have questions or need assistance with the OpenStack API, please contact our support team.
