This guide walks you through setting up AWS S3 for your Spring Boot application deployment on EC2.
## Prerequisites

- ✅ EC2 instance already running
- ✅ AWS Console access with appropriate permissions
## Step 1: Create S3 Bucket

### Log into AWS Console

- Go to https://console.aws.amazon.com
- Navigate to the S3 service (search "S3" in the top search bar)
### Create Bucket

- Click the "Create bucket" button
- Bucket name: Enter a unique name (e.g., `yourcompany-product-images-prod`)
  - ⚠️ Bucket names must be globally unique across all AWS accounts
  - Use lowercase letters, numbers, and hyphens only
  - Example: `amigoscode-product-images-2024`
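A quick way to sanity-check a candidate name before clicking through the console is to test it against the core naming rules above (lowercase letters, digits, hyphens, 3-63 characters, starting and ending with a letter or digit). A minimal sketch; the class name `BucketNameValidator` is illustrative, not part of the project:

```java
public class BucketNameValidator {

    // Core S3 bucket naming rules: 3-63 characters, lowercase letters,
    // digits, and hyphens only; must start and end with a letter or digit.
    public static boolean isValid(String name) {
        if (name == null || name.length() < 3 || name.length() > 63) {
            return false;
        }
        return name.matches("[a-z0-9][a-z0-9-]*[a-z0-9]");
    }

    public static void main(String[] args) {
        System.out.println(isValid("amigoscode-product-images-2024")); // valid
        System.out.println(isValid("My_Bucket"));                      // invalid: uppercase and underscore
    }
}
```

Note that AWS enforces a few additional rules (no consecutive periods, no IP-address-shaped names); the check above covers the ones that matter for names built from the pattern in this guide.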
### Configure Bucket Settings

- AWS Region: Select your region (e.g., `us-east-1` - same as your EC2 region)
- Object Ownership: Select "ACLs disabled (recommended)"
- Block Public Access:
  - ✅ Keep all settings enabled (unless you need public access to images)
  - If you want public image access, uncheck "Block all public access" and acknowledge the warning
- Versioning: Disable (unless you need versioning)
- Encryption: Choose "Enable" → "Amazon S3 managed keys (SSE-S3)" (recommended)
- Tags: Optional - add tags for organization
### Create the Bucket

- Click the "Create bucket" button at the bottom
- Note down your bucket name - you'll need it for configuration
## Step 2: Create IAM User with S3 Access

### Navigate to IAM

- In the AWS Console, search for "IAM" and open the service
- Click "Users" in the left sidebar
- Click the "Create user" button
### Set User Details

- User name: `product-service-s3-user` (or your preferred name)
- Select credential type:
  - ✅ Check "Provide user access to the AWS Management Console" (optional, for testing)
  - Or just check "Access key - Programmatic access" (for API access only)
- Click "Next"
### Set Permissions

- Select "Attach policies directly"
- Click the "Create policy" button (opens in a new tab)
### Create Custom Policy

- In the new tab, click the "JSON" tab
- Replace the content with:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME"
    }
  ]
}
```

(Note: there is no separate `s3:HeadObject` IAM action - `HeadObject` requests are authorized by `s3:GetObject`.)

- Replace `YOUR-BUCKET-NAME` with your actual bucket name (e.g., `amigoscode-product-images-2024`)
- Click "Next"
- Policy name: `ProductServiceS3Policy`
- Description: `Allows S3 access for product image service`
- Click "Create policy"
- Go back to the user creation tab
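The two `Resource` entries in the policy differ only by the `/*` suffix: object-level actions (`GetObject`, `PutObject`, `DeleteObject`) need the object ARN pattern, while `ListBucket` needs the bucket ARN itself. A dependency-free sketch of that construction (the helper name `S3Arns` is illustrative):

```java
public class S3Arns {

    // Bucket ARN: used by bucket-level actions such as s3:ListBucket.
    public static String bucketArn(String bucket) {
        return "arn:aws:s3:::" + bucket;
    }

    // Object ARN pattern: used by object-level actions such as
    // s3:GetObject, s3:PutObject, and s3:DeleteObject.
    public static String objectArnPattern(String bucket) {
        return bucketArn(bucket) + "/*";
    }

    public static void main(String[] args) {
        System.out.println(bucketArn("amigoscode-product-images-2024"));
        System.out.println(objectArnPattern("amigoscode-product-images-2024"));
    }
}
```

Getting this distinction wrong (e.g., attaching `ListBucket` to the `/*` resource) is a common cause of Access Denied errors even when the policy looks complete.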
### Attach Policy to User

- Refresh the policy list (click the refresh icon)
- Search for `ProductServiceS3Policy`
- ✅ Check the box next to your policy
- Click "Next"
### Review and Create

- Review the settings
- Click "Create user"
### Save Access Keys

⚠️ CRITICAL - DO THIS NOW

- After creating the user, you'll see the "Access key" section
- Click "Create access key"
- Use case: Select "Application running outside AWS"
- Click "Next"
- Click "Create access key"
- ⚠️ IMPORTANT:
  - Copy the Access Key ID (e.g., `AKIAIOSFODNN7EXAMPLE`)
  - Copy the Secret Access Key (e.g., `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`)
  - Download the CSV file as a backup
  - You cannot view the secret key again after closing this page
- Click "Done"
## Step 3: Grant Your EC2 Instance Access to S3

### Method 1: IAM Role (Recommended)

#### Create IAM Role

- In the IAM Console, click "Roles" in the left sidebar
- Click "Create role"
- Select trusted entity: "AWS service"
- Use case: Select "EC2"
- Click "Next"
#### Attach Permissions

- Search for `ProductServiceS3Policy` (the policy you created earlier)
- ✅ Check the box
- Click "Next"
#### Name Role

- Role name: `EC2-S3-Access-Role`
- Description: `Allows EC2 instance to access S3 bucket`
- Click "Create role"
#### Attach Role to EC2 Instance

- Go to EC2 Console → Instances
- Select your EC2 instance
- Click "Actions" → "Security" → "Modify IAM role"
- Select `EC2-S3-Access-Role`
- Click "Update IAM role"
### Method 2: Access Keys (Alternative)

If you prefer using access keys directly:

1. SSH into your EC2 instance:

```shell
ssh -i your-key.pem ec2-user@your-ec2-ip
```
2. Set environment variables (add to `/etc/environment` or your deployment script):

```shell
sudo nano /etc/environment
```

Add these lines:

```shell
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_DEFAULT_REGION=us-east-1
```

3. Or create a `.env` file in your application directory:

```shell
nano ~/app/.env
```

```shell
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_DEFAULT_REGION=us-east-1
```
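Inside the application, these variables surface through `System.getenv`, which returns `null` for anything unset, so a fallback is usually applied. A minimal, dependency-free sketch of that lookup (the helper name `EnvConfig` is an illustration, not the project's code):

```java
public class EnvConfig {

    // Return the given value if it is set and non-empty, else the fallback.
    // Keeping the logic as a pure function makes it easy to test.
    static String getOrDefault(String value, String fallback) {
        return (value == null || value.isBlank()) ? fallback : value;
    }

    public static void main(String[] args) {
        String region = getOrDefault(System.getenv("AWS_DEFAULT_REGION"), "us-east-1");
        System.out.println("Resolved region: " + region);
    }
}
```

In a Spring Boot app you would normally let `${AWS_DEFAULT_REGION:us-east-1}`-style property placeholders do this for you, but the resolution order is the same.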
## Step 4: Configure Application Properties

Create or update your `application.properties` for production:

```properties
# AWS S3 Configuration (Production)
aws.region=us-east-1
aws.s3.bucket=your-bucket-name-here
aws.s3.endpoint-override=
aws.s3.path-style-enabled=false
aws.access-key-id=${AWS_ACCESS_KEY_ID}
aws.secret-access-key=${AWS_SECRET_ACCESS_KEY}
```

If not using environment variables (not recommended - the credentials then sit on disk in plain text):

```properties
# AWS S3 Configuration (Production)
aws.region=us-east-1
aws.s3.bucket=your-bucket-name-here
aws.s3.endpoint-override=
aws.s3.path-style-enabled=false
aws.access-key-id=AKIAIOSFODNN7EXAMPLE
aws.secret-access-key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

### Update AwsS3Config for IAM Roles

If using an IAM role (Method 1), you need to update `AwsS3Config.java`. The current code uses `StaticCredentialsProvider`; with an IAM role, the AWS SDK should instead pick up instance credentials automatically via its default credential chain. Update the config:
```java
@Bean
public S3Client s3Client() {
    S3ClientBuilder builder = S3Client.builder()
            .region(Region.of(region))
            .serviceConfiguration(
                    S3Configuration.builder()
                            .pathStyleAccessEnabled(pathStyleEnabled)
                            .build()
            );

    // Only use static credentials if an access key is provided
    // ("minioadmin" is the local MinIO dev default, not a real AWS key)
    if (StringUtils.isNotBlank(accessKeyId) && !accessKeyId.equals("minioadmin")) {
        builder = builder.credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create(accessKeyId, secretAccessKey))
        );
    }
    // Otherwise, the AWS SDK falls back to the default credential chain
    // (IAM role, environment variables, etc.)

    if (StringUtils.isNotBlank(endpointOverride)) {
        builder = builder.endpointOverride(URI.create(endpointOverride));
    }

    return builder.build();
}
```
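The credential branch above boils down to a single predicate: use static credentials only when a real access key is configured. Extracting it as a pure function makes the rule explicit and testable. A dependency-free sketch (the class name `CredentialsMode` is illustrative):

```java
public class CredentialsMode {

    // Mirrors the condition in s3Client(): static credentials are used only
    // when an access key is set and it is not the local MinIO dev default.
    public static boolean useStaticCredentials(String accessKeyId) {
        return accessKeyId != null
                && !accessKeyId.isBlank()
                && !accessKeyId.equals("minioadmin");
    }

    public static void main(String[] args) {
        System.out.println(useStaticCredentials("AKIAIOSFODNN7EXAMPLE")); // static keys
        System.out.println(useStaticCredentials("minioadmin"));           // default chain
        System.out.println(useStaticCredentials(""));                     // default chain
    }
}
```

With an IAM role attached (Method 1), simply leave `aws.access-key-id` empty so this predicate is false and the SDK's default chain takes over.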
## Step 5: Deploy the Application

1. Build your application:

```shell
mvn clean package
```

2. Transfer the JAR to EC2:

```shell
scp -i your-key.pem target/product-service.jar ec2-user@your-ec2-ip:~/app/
```

3. SSH into EC2:

```shell
ssh -i your-key.pem ec2-user@your-ec2-ip
```

4. Create `application.properties` for production:

```shell
cd ~/app
nano application.properties
```

Add the production configuration (see Step 4).

5. Run the application:

```shell
java -jar product-service.jar --spring.config.location=application.properties
```

Or with environment variables:

```shell
export AWS_ACCESS_KEY_ID=your-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_DEFAULT_REGION=us-east-1
java -jar product-service.jar
```
## Step 6: Test the Setup

### Test Image Upload

- Use your application's API to upload a product image
- Go to S3 Console → your bucket
- Verify the image appears in the `products/` folder
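Objects in the bucket live under the `products/` prefix (S3 has no real folders - the prefix is just part of the object key). A sketch of how such keys are typically built; the helper `ObjectKeys` and the UUID naming scheme are assumptions for illustration, not necessarily the project's exact convention:

```java
import java.util.UUID;

public class ObjectKeys {

    // Build a collision-free object key under the products/ prefix,
    // e.g. products/<uuid>.jpg
    public static String productImageKey(String extension) {
        return "products/" + UUID.randomUUID() + "." + extension;
    }

    public static void main(String[] args) {
        System.out.println(productImageKey("jpg"));
    }
}
```

Random UUID keys avoid overwriting an existing image when two products upload files with the same original name.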
### Check Permissions

If the upload fails, verify:

- IAM role/policy is attached correctly
- Bucket name matches the configuration
- Region matches your EC2 instance region
## Security Best Practices

- ✅ Use IAM roles instead of access keys when possible (Method 1)
- ✅ Never commit credentials to version control
- ✅ Use environment variables or AWS Secrets Manager for sensitive data
- ✅ Restrict S3 bucket access to specific IPs if needed (via bucket policy)
- ✅ Enable S3 bucket versioning for production (if needed)
- ✅ Enable CloudTrail to audit S3 access (optional but recommended)
## Troubleshooting

Access denied errors:

- Verify IAM policy permissions and the bucket name
- Check CloudTrail logs for detailed error messages

Bucket not found errors:

- Verify the bucket name matches exactly (bucket names are case-sensitive)
- Ensure the bucket is in the same region as configured

Wrong endpoint or connection errors:

- Remove the `aws.s3.endpoint-override` property for production
- Set `aws.s3.path-style-enabled=false` for AWS S3
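Both endpoint-related properties are local/MinIO development settings, so a quick pre-flight check can catch this misconfiguration before deploying. A dependency-free sketch (the class name `ProdConfigCheck` is illustrative):

```java
public class ProdConfigCheck {

    // For real AWS S3 in production, endpoint-override should be empty
    // and path-style access disabled; both non-defaults point at MinIO/dev.
    public static boolean isProductionSafe(String endpointOverride, boolean pathStyleEnabled) {
        boolean noOverride = endpointOverride == null || endpointOverride.isBlank();
        return noOverride && !pathStyleEnabled;
    }

    public static void main(String[] args) {
        System.out.println(isProductionSafe("", false));                     // production-safe
        System.out.println(isProductionSafe("http://localhost:9000", true)); // dev/MinIO settings
    }
}
```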
To verify S3 access from the EC2 instance, use the AWS CLI:

```shell
aws s3 ls s3://your-bucket-name/ --region us-east-1
```

## Next Steps

- Set up CloudFront CDN for image delivery (optional)
- Configure S3 lifecycle policies for old images
- Set up monitoring with CloudWatch
- Configure backup strategies