Amazon S3 is the backbone of file storage for millions of applications. This guide covers the S3 SDK, presigned URLs, bucket policies, and CloudFront integration.
S3 Basics: Buckets and Objects
S3 stores data as objects inside buckets.
S3 Core Concepts:
Bucket — Top-level container (globally unique name, tied to a region)
Object — A file stored in a bucket (up to 5TB per object)
Key — The object's path/name: "uploads/2026/user-123/photo.jpg"
Prefix — Virtual folder: "uploads/2026/" (S3 has no real folders)
Region — Where the bucket lives: us-east-1, eu-west-1, ap-southeast-1
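Since S3 has no real folders, consistent key construction matters. A minimal sketch of a key builder — the `buildKey` helper and its prefix layout are illustrative conventions, not part of any AWS API:

```typescript
// Build a namespaced S3 key; "folders" are just prefixes inside the key string.
// Hypothetical helper — adapt the layout to your own conventions.
function buildKey(userId: string, fileName: string, date = new Date()): string {
  const year = date.getUTCFullYear();
  // Strip path separators so user input cannot escape the prefix
  const safeName = fileName.replace(/[/\\]/g, '_');
  return `uploads/${year}/${userId}/${safeName}`;
}

console.log(buildKey('user-123', 'photo.jpg', new Date('2026-01-15')));
// → uploads/2026/user-123/photo.jpg
```

Listing with the `uploads/2026/` prefix then returns everything "inside" that virtual folder, even though S3 stores the keys flat.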
Storage Classes:
Standard — Frequently accessed data, 99.99% availability
Standard-IA — Infrequent access, lower cost, retrieval fee
One Zone-IA — Lower cost, single AZ (no cross-AZ replication)
Intelligent-Tiering — Auto-move between tiers based on access patterns
Glacier Instant — Archive with millisecond retrieval
Glacier Flexible — Archive with minutes-to-hours retrieval
Glacier Deep Archive — Cheapest storage, 12-48h retrieval
Uploading with the AWS SDK v3
The AWS SDK v3 uses a modular design to reduce bundle size.
// AWS SDK v3 — File Upload to S3
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
  ListObjectsV2Command,
} from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage'; // For multipart upload
import { fromEnv } from '@aws-sdk/credential-providers';

const s3 = new S3Client({
  region: process.env.AWS_REGION || 'us-east-1',
  credentials: fromEnv(), // Reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
});

const BUCKET = process.env.S3_BUCKET_NAME!;

// 1. Simple upload (files < 5MB)
async function uploadFile(key: string, file: Buffer, contentType: string) {
  const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: key, // e.g., 'uploads/2026/user-123/avatar.jpg'
    Body: file,
    ContentType: contentType,
    // Optional: set cache headers
    CacheControl: 'max-age=31536000',
    // Optional: make publicly readable
    // ACL: 'public-read',
    // Optional: custom metadata
    Metadata: {
      'uploaded-by': 'server',
      'original-name': 'avatar.jpg',
    },
  });
  const result = await s3.send(command);
  return {
    url: `https://${BUCKET}.s3.amazonaws.com/${key}`,
    etag: result.ETag,
    versionId: result.VersionId,
  };
}

// 2. Multipart upload for large files (recommended for > 100MB)
async function uploadLargeFile(key: string, stream: NodeJS.ReadableStream, contentType: string) {
  const upload = new Upload({
    client: s3,
    params: {
      Bucket: BUCKET,
      Key: key,
      Body: stream,
      ContentType: contentType,
    },
    partSize: 10 * 1024 * 1024, // 10 MB parts
    queueSize: 4, // 4 concurrent uploads
  });
  upload.on('httpUploadProgress', (progress) => {
    console.log(`Uploaded: ${progress.loaded}/${progress.total} bytes`);
  });
  return upload.done();
}
Presigned URLs for Secure Direct Uploads
Presigned URLs let clients upload directly to S3 without routing traffic through your server.
// Presigned URLs — Direct Browser-to-S3 Upload
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
import { randomUUID } from 'crypto';

// Generate presigned upload URL (browser uploads directly to S3)
async function generateUploadUrl(
  fileName: string,
  fileType: string,
  userId: string
) {
  const key = `uploads/${userId}/${randomUUID()}-${fileName}`;
  const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: key,
    ContentType: fileType,
    // Optional: limit file size with a Content-Length condition
    // This must be enforced server-side with a policy
  });
  const signedUrl = await getSignedUrl(s3, command, {
    expiresIn: 15 * 60, // 15 minutes
  });
  return {
    uploadUrl: signedUrl,
    key, // Return key so client can reference the uploaded file
    expiresIn: 900, // seconds
  };
}
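Downloads work the same way, signing a `GetObjectCommand` (already imported above) instead of a `PutObjectCommand`. A sketch reusing the same `s3` client and `BUCKET`; the `generateDownloadUrl` name is illustrative:

```typescript
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { GetObjectCommand } from '@aws-sdk/client-s3';

// Generate a short-lived presigned GET URL so the browser can
// fetch a private object directly from S3.
async function generateDownloadUrl(key: string, expiresIn = 5 * 60) {
  const command = new GetObjectCommand({
    Bucket: BUCKET,
    Key: key,
    // Optional: force a download dialog with a friendly filename
    ResponseContentDisposition: `attachment; filename="${key.split('/').pop()}"`,
  });
  return getSignedUrl(s3, command, { expiresIn });
}
```

Download URLs can be even shorter-lived than upload URLs, since the client typically uses them immediately.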
// Express.js endpoint
app.post('/api/upload-url', authenticate, async (req, res) => {
  const { fileName, fileType, fileSize } = req.body;
  // Validate file type and size on server
  const allowedTypes = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'];
  if (!allowedTypes.includes(fileType)) {
    return res.status(400).json({ error: 'File type not allowed' });
  }
  if (fileSize > 10 * 1024 * 1024) { // 10 MB limit
    return res.status(400).json({ error: 'File too large' });
  }
  const { uploadUrl, key } = await generateUploadUrl(fileName, fileType, req.user.id);
  res.json({ uploadUrl, key });
});

// Client-side: use the presigned URL to upload
async function uploadToS3(file, presignedUrl) {
  const response = await fetch(presignedUrl, {
    method: 'PUT',
    body: file,
    headers: {
      'Content-Type': file.type,
    },
  });
  if (!response.ok) throw new Error('Upload failed');
  return response;
}
CloudFront CDN Integration
CloudFront is AWS's CDN; it caches S3 content at edge locations worldwide.
// CloudFront + S3 Setup
// 1. Bucket policy to allow CloudFront (Origin Access Control)
const bucketPolicy = {
  Version: '2012-10-17',
  Statement: [
    {
      Sid: 'AllowCloudFrontServicePrincipal',
      Effect: 'Allow',
      Principal: {
        Service: 'cloudfront.amazonaws.com',
      },
      Action: 's3:GetObject',
      Resource: `arn:aws:s3:::my-bucket/*`,
      Condition: {
        StringEquals: {
          'AWS:SourceArn': 'arn:aws:cloudfront::123456789:distribution/ABCDEF123456',
        },
      },
    },
  ],
};

// 2. Generate signed URLs for private CloudFront content
import { getSignedUrl } from '@aws-sdk/cloudfront-signer';

function generateCloudFrontSignedUrl(key: string, expirySeconds = 3600) {
  const url = `https://${process.env.CLOUDFRONT_DOMAIN}/${key}`;
  const expiryDate = new Date();
  expiryDate.setSeconds(expiryDate.getSeconds() + expirySeconds);
  return getSignedUrl({
    url,
    keyPairId: process.env.CLOUDFRONT_KEY_PAIR_ID!,
    privateKey: process.env.CLOUDFRONT_PRIVATE_KEY!,
    dateLessThan: expiryDate.toISOString(),
  });
}

// 3. Invalidate CloudFront cache when S3 objects change
import { CloudFrontClient, CreateInvalidationCommand } from '@aws-sdk/client-cloudfront';

const cloudfront = new CloudFrontClient({ region: 'us-east-1' });

async function invalidateCache(paths: string[]) {
  const command = new CreateInvalidationCommand({
    DistributionId: process.env.CLOUDFRONT_DISTRIBUTION_ID!,
    InvalidationBatch: {
      CallerReference: Date.now().toString(),
      Paths: {
        Quantity: paths.length,
        Items: paths.map(p => `/${p}`),
      },
    },
  });
  return cloudfront.send(command);
}

// Usage: invalidate a specific file after update
await invalidateCache(['uploads/profile-pictures/user-123.jpg']);
// Wildcard: invalidate all files in a folder
await invalidateCache(['uploads/*']);
Frequently Asked Questions
Should I allow public access to my S3 bucket?
Only for static website hosting or truly public assets. Otherwise, keep the bucket private and serve files through presigned URLs.
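To enforce this, S3's Block Public Access settings can be enabled at the bucket level. A sketch of the configuration object — the field names follow the S3 `PutPublicAccessBlock` API, but the surrounding SDK call is assumed and not shown in this guide:

```typescript
// Block Public Access configuration — all four flags on keeps the
// bucket private regardless of ACLs or bucket policies.
const publicAccessBlock = {
  BlockPublicAcls: true,       // Reject new public ACLs
  IgnorePublicAcls: true,      // Ignore any existing public ACLs
  BlockPublicPolicy: true,     // Reject public bucket policies
  RestrictPublicBuckets: true, // Restrict access to authorized AWS principals only
};
```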
How long should presigned URLs stay valid?
Use 15-30 minutes for upload URLs. The maximum lifetime is 7 days.
How can I reduce S3 storage costs?
Use S3 Intelligent-Tiering and lifecycle rules to automatically transition or delete older objects.
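As an illustration of the lifecycle rules mentioned above, here is a sketch of a lifecycle configuration. The rule name, prefix, and day thresholds are illustrative; the object shape follows the S3 `PutBucketLifecycleConfiguration` API:

```typescript
// Lifecycle configuration: tier down aging uploads, then expire them.
const lifecycleConfiguration = {
  Rules: [
    {
      ID: 'tier-down-old-uploads', // hypothetical rule name
      Status: 'Enabled',
      Filter: { Prefix: 'uploads/' }, // only applies under this prefix
      Transitions: [
        { Days: 30, StorageClass: 'STANDARD_IA' }, // infrequent access after 30 days
        { Days: 90, StorageClass: 'GLACIER_IR' },  // Glacier Instant Retrieval after 90
      ],
      Expiration: { Days: 365 }, // delete after one year
    },
  ],
};
```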
What is multipart upload?
Multipart upload splits large files into parts that upload in parallel. It is recommended for files over 100 MB.
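The arithmetic behind part sizing is worth checking: S3 allows at most 10,000 parts per upload, and every part except the last must be at least 5 MB. A small sketch (the `partCount` helper is illustrative):

```typescript
// Number of parts a multipart upload needs for a given file and part size.
function partCount(fileSizeBytes: number, partSizeBytes: number): number {
  return Math.ceil(fileSizeBytes / partSizeBytes);
}

const MB = 1024 * 1024;
// A 1 GB file with the 10 MB parts used above → 103 parts, well under 10,000.
console.log(partCount(1024 * MB, 10 * MB)); // → 103
// At the minimum 5 MB part size, 10,000 parts caps a single upload near 48.8 GB;
// larger files need larger parts (up to 5 GB per part, 5 TB per object).
```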