S3 Upload with Presigned URL and Lambda Processing in Node.js with TypeScript
Learn S3 upload with presigned URL step by step using Lambda and React. Implement secure flow, correct binary upload, and post-upload processing.
Flow Overview
The upload follows a 3-step flow:
┌─────────────────────────────────────────────────────────────┐
│ │
│ 1. Client requests URL │
│ ───────────────── │
│ Client → API → S3 (generates presigned URL) │
│ ← URL returned to client │
│ │
│ 2. Client uploads │
│ ──────────────────── │
│ Client → S3 (direct upload using the URL) │
│ │
│ 3. S3 notifies Lambda │
│ ───────────────────── │
│ S3 → Lambda (ObjectCreated event) │
│ └─→ Processes file (thumbnail, validation, etc.) │
│ │
└─────────────────────────────────────────────────────────────┘
Why this flow?
- Security: The client does not need AWS credentials
- Performance: Upload goes directly to S3, not through your server
- Cost: Your Lambda does not waste time/memory handling uploads
Step 1: Generating a Presigned URL
What is a Presigned URL?
It is a temporary URL that grants permission to perform a specific action on S3 (upload or download) without requiring credentials.
Normal URL:
https://bucket.s3.amazonaws.com/file.jpg
→ Access denied (credentials required)
Presigned URL:
https://bucket.s3.amazonaws.com/file.jpg?X-Amz-Signature=abc123...
→ Works! (valid signature for a limited time)
Generating an Upload URL
// src/infra/services/s3-storage.service.ts
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
export class S3StorageService {
  private readonly s3 = new S3Client({});
  private readonly bucketName = process.env.MEDIA_BUCKET_NAME;

  async generateUploadUrl(params: {
    key: string;
    contentType: string;
    expiresIn?: number;
  }): Promise<{ url: string; expiresIn: number }> {
    const expiresIn = params.expiresIn ?? 300; // 5 minutes

    const command = new PutObjectCommand({
      Bucket: this.bucketName,
      Key: params.key,
      ContentType: params.contentType
    });

    const url = await getSignedUrl(this.s3, command, { expiresIn });

    return { url, expiresIn };
  }
}
Upload URL Handler
// src/infra/http/handlers/generate-upload-url.ts
import crypto from "node:crypto";
import type { APIGatewayProxyEventV2WithLambdaAuthorizer } from "aws-lambda";

export async function handler(
  event: APIGatewayProxyEventV2WithLambdaAuthorizer<{ sub: string }>
) {
  // 1. Validate input (event.body may be undefined)
  const { fileName, fileSize, contentType } = JSON.parse(event.body ?? "{}");

  // 2. Generate identifiers
  // With a Lambda authorizer, claims live under requestContext.authorizer.lambda
  const userId = event.requestContext.authorizer.lambda.sub;
  const fileId = crypto.randomUUID();
  const extension = fileName.split(".").pop();
  const key = `media/${userId}/${fileId}.${extension}`;

  // 3. Save metadata in database
  const media = Media.create({
    ownerId: new UniqueEntityId(userId),
    fileName,
    fileSize,
    contentType,
    s3Key: key,
    status: "uploading"
  });
  await mediaRepository.save(media);

  // 4. Generate presigned URL
  const { url, expiresIn } = await storageService.generateUploadUrl({
    key,
    contentType
  });

  // 5. Return URL to client
  return {
    statusCode: 201,
    body: JSON.stringify({
      uploadUrl: url,
      fileId,
      expiresIn
    })
  };
}
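The key construction above trusts the client-supplied fileName for the extension, so a name like `photo.png.exe` ends up in the key as-is. A hardened variant can derive the extension from the (already validated) content type instead. This is a sketch under that assumption; `buildMediaKey` and `EXTENSION_BY_TYPE` are hypothetical names, not part of the handler above:

```typescript
import { randomUUID } from "node:crypto";

// Map each accepted content type to the extension we store.
// Deriving the extension from the validated content type (rather than
// from the client-supplied file name) keeps keys predictable.
const EXTENSION_BY_TYPE: Record<string, string> = {
  "image/jpeg": "jpg",
  "image/png": "png",
  "image/webp": "webp"
};

export function buildMediaKey(userId: string, contentType: string): string {
  const extension = EXTENSION_BY_TYPE[contentType];
  if (!extension) {
    throw new Error(`Unsupported content type: ${contentType}`);
  }
  // media/<userId>/<uuid>.<ext> — unique, unguessable, no client input in the path
  return `media/${userId}/${randomUUID()}.${extension}`;
}
```

The original fileName can still be stored as display metadata in the database; it just never reaches the S3 key.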
Step 2: Client Upload
IMPORTANT: How to upload correctly
The client must send the file as raw binary, NOT as form-data.
┌─────────────────────────────────────────────────────────────┐
│ │
│ WRONG: multipart/form-data │
│ ────────────────────────── │
│ S3 will save the form envelope, not just the file │
│ │
│ Saved content: │
│ ------WebKitFormBoundary │
│ Content-Disposition: form-data; name="file" │
│ ... file data ... │
│ ------WebKitFormBoundary-- │
│ │
│ CORRECT: binary │
│ ─────────────────── │
│ S3 saves exactly the file bytes │
│ │
│ Saved content: │
│ [raw image bytes] │
│ │
└─────────────────────────────────────────────────────────────┘
Example: JavaScript/React
// WRONG - DO NOT DO THIS
const formData = new FormData();
formData.append("file", file);
await fetch(presignedUrl, {
method: "PUT",
body: formData // This sends multipart!
});
// CORRECT
await fetch(presignedUrl, {
method: "PUT",
body: file, // File object directly
headers: {
"Content-Type": file.type // image/jpeg, image/png, etc.
}
});
Complete React Component Example
function UploadComponent() {
  const [file, setFile] = useState(null);
  const [uploading, setUploading] = useState(false);

  async function handleUpload() {
    if (!file) return;
    setUploading(true);

    try {
      // 1. Request presigned URL from backend
      const response = await fetch("/api/uploads/presign", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${token}`
        },
        body: JSON.stringify({
          fileName: file.name,
          fileSize: file.size,
          contentType: file.type
        })
      });
      const { uploadUrl, fileId } = await response.json();

      // 2. Upload directly to S3
      const uploadResponse = await fetch(uploadUrl, {
        method: "PUT",
        body: file, // Direct file upload, no FormData!
        headers: {
          "Content-Type": file.type
        }
      });
      // fetch does not reject on HTTP errors, so check explicitly
      if (!uploadResponse.ok) {
        throw new Error(`Upload failed: ${uploadResponse.status}`);
      }

      console.log("Upload completed! File ID:", fileId);
    } catch (error) {
      console.error("Upload error:", error);
    } finally {
      setUploading(false);
    }
  }

  return (
    <div>
      <input
        type="file"
        accept="image/*"
        onChange={e => setFile(e.target.files[0])}
      />
      <button onClick={handleUpload} disabled={uploading}>
        {uploading ? "Uploading..." : "Upload"}
      </button>
    </div>
  );
}
Example: cURL
# First, get the presigned URL
curl -X POST https://api.example.com/uploads/presign \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{"fileName":"photo.jpg","fileSize":12345,"contentType":"image/jpeg"}'
# Response: {"uploadUrl":"https://bucket.s3...","fileId":"abc123"}
# Then, upload the file
curl -X PUT "https://bucket.s3.amazonaws.com/media/user/abc123.jpg?X-Amz-..." \
-H "Content-Type: image/jpeg" \
--data-binary @photo.jpg
How to identify incorrect uploads
If you get an "unsupported image format" error during processing, check the first bytes of the file:
// In the processing Lambda
console.log("First bytes:", buffer.subarray(0, 16).toString("hex"));
| Format | First bytes (hex) |
|---|---|
| PNG | 89504e47 (89 P N G) |
| JPEG | ffd8ff |
| WebP | 52494646 (RIFF) |
| Form-data (WRONG) | 2d2d2d2d (----) |
If you see 2d2d2d2d, the client is sending multipart form-data.
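That signature check can be automated instead of eyeballing hex dumps. A sketch using the signatures from the table above (`detectFormat` is a hypothetical helper, not part of the processing handler):

```typescript
// Sniff the real format from the first bytes, independent of Content-Type.
const SIGNATURES: Array<{ format: string; hex: string }> = [
  { format: "png", hex: "89504e47" },
  { format: "jpeg", hex: "ffd8ff" },
  { format: "webp", hex: "52494646" }, // RIFF container (WebP, also WAV/AVI)
  { format: "form-data", hex: "2d2d2d2d" } // "----": multipart envelope, wrong upload
];

export function detectFormat(buffer: Buffer): string {
  const head = buffer.subarray(0, 8).toString("hex");
  const match = SIGNATURES.find(s => head.startsWith(s.hex));
  return match ? match.format : "unknown";
}
```

Calling this at the top of the processing Lambda lets you fail fast with a precise error ("client sent form-data") instead of an opaque image-decoder message.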
Step 3: Processing After Upload
Configuring S3 Trigger
# serverless/functions.yml
functions:
  processUpload:
    handler: src/handlers/process-upload.handler
    timeout: 30
    memorySize: 512
    layers:
      - !Ref SharpLambdaLayer
    events:
      - s3:
          bucket: ${self:provider.environment.MEDIA_BUCKET_NAME}
          event: s3:ObjectCreated:*
          existing: true # Bucket already exists
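The trigger alone does not give the function access to the objects: the Lambda role also needs read permission on the bucket (and write permission, for the thumbnails it saves). A sketch of the corresponding IAM statement in Serverless Framework terms; the exact nesting (`iam.role.statements` vs the older `iamRoleStatements`) depends on your framework version, so treat this as an outline rather than a drop-in block:

```yaml
# serverless.yml (provider section) — sketch, adjust to your setup
provider:
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:PutObject
          Resource: arn:aws:s3:::${self:provider.environment.MEDIA_BUCKET_NAME}/*
```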
S3 Event
When a file is created in S3, Lambda receives an event like this:
interface S3Event {
  Records: [
    {
      s3: {
        bucket: {
          name: "my-bucket";
        };
        object: {
          key: "media/user-123/file-456.jpg";
          size: 12345;
        };
      };
    }
  ];
}
Processing Handler
// src/infra/http/handlers/process-upload.ts
import type { S3Event, Context } from "aws-lambda";
export async function handler(event: S3Event, context: Context) {
  const key = event.Records[0].s3.object.key;
  console.log(`Processing: ${key}`);

  // 1. Fetch metadata from database
  const media = await mediaRepository.findByS3Key(key);
  if (!media) {
    console.error("Media not found in database");
    return;
  }

  // 2. Download file from S3
  const originalFile = await storageService.getObject(key);

  // 3. Process (generate thumbnail)
  const thumbnail =
    await imageProcessingService.generateThumbnail(originalFile);

  // 4. Save thumbnail
  const thumbnailKey = `thumbnails/${media.ownerId}/${media.id}.jpg`;
  await storageService.putObject({
    key: thumbnailKey,
    body: thumbnail,
    contentType: "image/jpeg"
  });

  // 5. Update status in database
  media.status = "ready";
  media.thumbnail = thumbnailKey;
  await mediaRepository.save(media);

  return { statusCode: 200 };
}
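One gotcha with the lookup above: S3 delivers object keys in event notifications URL-encoded, with spaces turned into `+`. A key stored at presign time as `media/user/my photo.jpg` will not match `findByS3Key` unless the event key is decoded first. A sketch of the standard decode step:

```typescript
// S3 event notification keys are URL-encoded with '+' for spaces,
// so decode before comparing against keys stored at presign time.
export function decodeS3Key(rawKey: string): string {
  return decodeURIComponent(rawKey.replace(/\+/g, " "));
}
```

With UUID-based keys like the ones generated in Step 1 the raw and decoded keys are identical, but decoding unconditionally keeps the handler safe if the key scheme ever changes.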
Full Flow Visual
┌─────────────────────────────────────────────────────────────┐
│ │
│ Client API S3 │
│ │ │ │ │
│ │ POST /presign │ │ │
│ │ {fileName, size} │ │ │
│ │───────────────────>│ │ │
│ │ │ │ │
│ │ │ Generates │ │
│ │ │ presigned URL │ │
│ │ │ │ │
│ │ │ Saves metadata │ │
│ │ │ (status:uploading) │
│ │ │ │ │
│ │ {uploadUrl, fileId}│ │ │
│ │<───────────────────│ │ │
│ │ │ │ │
│ │ PUT (binary) │ │ │
│ │──────────────────────────────────────>│ │
│ │ │ │ │
│ │ 200 OK │ │ │
│ │<──────────────────────────────────────│ │
│ │ │ │ │
│ │ │ │ S3 Event │
│ │ │ │───────┐ │
│ │ │ │ │ │
│ │ │ │ ▼ │
│ │ │ ┌───────────────┐ │
│ │ │ │ processUpload │ │
│ │ │ │ Lambda │ │
│ │ │ └───────────────┘ │
│ │ │ │ │
│ │ │ Updates status │ │
│ │ │ (status:ready) │ │
│ │ │ │ │
└─────────────────────────────────────────────────────────────┘
Generating a Download URL
To download private files, generate a presigned download URL:
// Requires GetObjectCommand in the imports from "@aws-sdk/client-s3"
async generateDownloadUrl(key: string): Promise<{ url: string }> {
  const command = new GetObjectCommand({
    Bucket: this.bucketName,
    Key: key
  });

  const url = await getSignedUrl(this.s3, command, { expiresIn: 3600 });

  return { url };
}
Security Tips
1. Validate content type
const allowedTypes = ["image/jpeg", "image/png", "image/webp"];

if (!allowedTypes.includes(contentType)) {
  throw new ValidationError("File type not allowed");
}
2. Limit file size
const maxSize = 10 * 1024 * 1024; // 10MB

if (fileSize > maxSize) {
  throw new ValidationError("File too large");
}
3. Use unique paths
// Prevent collisions and guessing
const key = `media/${userId}/${uuid()}.${extension}`;
4. Expire URLs quickly
// 5 minutes is enough for upload
const expiresIn = 300;
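Tips 1 and 2 can be folded into a single guard that runs before any presigned URL is issued. A sketch using the example limits from above; `validateUploadRequest` is a hypothetical name, and the plain `Error` stands in for whatever validation error type your app uses:

```typescript
const ALLOWED_TYPES = ["image/jpeg", "image/png", "image/webp"];
const MAX_SIZE = 10 * 1024 * 1024; // 10MB

// Throws on invalid input; call before generating the presigned URL.
export function validateUploadRequest(input: {
  fileSize: number;
  contentType: string;
}): void {
  if (!ALLOWED_TYPES.includes(input.contentType)) {
    throw new Error("File type not allowed");
  }
  if (input.fileSize <= 0 || input.fileSize > MAX_SIZE) {
    throw new Error("File too large or empty");
  }
}
```

Note this only validates the client's claims; the declared content type is also what gets baked into the presigned URL, and Step 3's magic-byte check is what catches a file whose actual bytes do not match.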
Troubleshooting
"Input buffer contains unsupported image format"
The file is not a valid image. Common causes:
- Client sent as form-data (see "How to upload correctly" section)
- Corrupted file
- Unsupported format
"Access Denied" on upload
- Check if the presigned URL has expired
- Verify that Content-Type matches the one used to generate the URL
- Check the bucket policy and the permissions of the credentials that signed the URL (a CORS failure is a different error, surfaced in the browser console)
Lambda is not triggered
- Verify S3 trigger configuration
- Check correct bucket
- Verify permissions (Lambda must have access to the bucket)
CORS for upload
Configure CORS on the bucket:
# serverless/resources.yml
MediaBucket:
  Type: AWS::S3::Bucket
  Properties:
    CorsConfiguration:
      CorsRules:
        - AllowedHeaders:
            - "*"
          AllowedMethods:
            - GET
            - PUT
          AllowedOrigins:
            - "*" # In production, specify domains
          MaxAge: 3000
Summary
- Presigned URL grants temporary permission for upload/download
- Client requests URL from backend and uploads directly to S3
- Upload must be raw binary, NOT multipart form-data
- S3 event triggers Lambda for processing
- Lambda downloads, processes, and updates status in database
- Always validate file type and size
- Use unique paths and short expiration times