Early-stage SaaS projects often store files on the local filesystem. It's simple — fs.writeFile() and you're done. But as the platform grows, you need CDN integration, redundancy, and the ability to scale beyond a single server. We built a storage abstraction layer that supports both local filesystem and AWS S3, switchable via a single environment variable.
THE PROBLEM: HARDCODED fs.writeFile
Before the abstraction, file operations were scattered across the codebase with direct filesystem calls:
// Blog media upload
const filepath = path.join(process.cwd(), "public/uploads/blog", filename);
await fs.writeFile(filepath, buffer);
// Avatar upload (different file, same pattern)
const avatarPath = path.join(process.cwd(), "public/uploads/avatars", filename);
await fs.writeFile(avatarPath, buffer);
// Delete
await fs.unlink(filepath);
Migrating to S3 would require finding and modifying every file that touches uploads. Instead of doing that, we built a layer that abstracts the storage provider.
THE STORAGE API
Five functions cover all storage operations:
// Upload a buffer to storage
export async function uploadFile(
  key: string,          // e.g. "uploads/blog/file.jpg"
  buffer: Buffer,
  contentType?: string
): Promise<string>;     // Returns public URL
// Delete a file
export async function deleteFile(key: string): Promise<void>;
// Check if a file exists
export async function fileExists(key: string): Promise<boolean>;
// List files in a directory/prefix
export async function listFiles(
  prefix: string,
  allowedExtensions: string[]
): Promise<StoredFile[]>;
// Get public URL for a storage key
export function getPublicUrl(key: string): string;
Every function works with storage keys — path-like strings such as "uploads/blog/image.jpg". The key is the same regardless of whether the file is on local disk or S3.
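A call site then shrinks to a single key-based call. A sketch of what that looks like, with uploadFile stubbed to its local-provider behavior so the snippet stands alone (the wrapper function and filenames here are illustrative, not from the real codebase):

```typescript
// Stub of uploadFile's local-provider behavior, so this sketch runs standalone.
// In the real module this would be an import from the storage layer instead.
async function uploadFile(key: string, buffer: Buffer, contentType?: string): Promise<string> {
  return `/${key}`; // the local provider returns a root-relative public URL
}

async function saveBlogImage(filename: string, buffer: Buffer): Promise<string> {
  // The caller only ever sees storage keys and public URLs;
  // no path.join(process.cwd(), ...) at the call site.
  return uploadFile(`uploads/blog/${filename}`, buffer, "image/jpeg");
}

saveBlogImage("hero.jpg", Buffer.from("fake-image-bytes")).then((url) => {
  console.log(url); // "/uploads/blog/hero.jpg" with the local provider
});
```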
PROVIDER SWITCHING
A single function determines which provider to use:
const STORAGE_PROVIDER = process.env.STORAGE_PROVIDER || "local";
const S3_BUCKET = process.env.S3_BUCKET || "";
const S3_ACCESS_KEY_ID = process.env.S3_ACCESS_KEY_ID || "";
function isS3(): boolean {
  return STORAGE_PROVIDER === "s3" && !!S3_BUCKET && !!S3_ACCESS_KEY_ID;
}
Every storage function checks isS3() and branches accordingly:
export async function uploadFile(key: string, buffer: Buffer, contentType?: string) {
  if (isS3()) {
    await getS3().send(new PutObjectCommand({
      Bucket: S3_BUCKET,
      Key: key,
      Body: buffer,
      ContentType: contentType,
      CacheControl: "public, max-age=31536000, immutable",
    }));
    return getPublicUrl(key);
  }
  // Local filesystem
  const filepath = path.join(PUBLIC_DIR, key);
  await mkdir(path.dirname(filepath), { recursive: true });
  await writeFile(filepath, buffer);
  return `/${key}`;
}
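The other four functions branch the same way. As an illustration, here is a sketch of what the local branch of fileExists could look like (the S3 branch would send a HeadObjectCommand and treat a NotFound error as false); the PUBLIC_DIR location is an assumption:

```typescript
import { stat } from "fs/promises";
import path from "path";

const PUBLIC_DIR = path.join(process.cwd(), "public"); // assumed location

// Local branch of fileExists: a stat() that swallows "file not found" errors.
async function fileExistsLocal(key: string): Promise<boolean> {
  try {
    await stat(path.join(PUBLIC_DIR, key));
    return true;
  } catch {
    return false;
  }
}

fileExistsLocal("uploads/blog/definitely-missing.jpg").then((exists) => {
  console.log(exists); // false unless that file actually exists on disk
});
```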
S3 CLIENT SINGLETON
The S3 client is created once and reused across all requests. This avoids the overhead of creating a new client (and establishing new TCP connections) for every operation:
let _s3Client: S3Client | null = null;
function getS3(): S3Client {
  if (!_s3Client) {
    _s3Client = new S3Client({
      region: S3_REGION,
      credentials: {
        accessKeyId: S3_ACCESS_KEY_ID,
        secretAccessKey: S3_SECRET_ACCESS_KEY,
      },
      ...(S3_ENDPOINT ? { endpoint: S3_ENDPOINT } : {}),
      forcePathStyle: S3_FORCE_PATH_STYLE,
    });
  }
  return _s3Client;
}
The S3_ENDPOINT and forcePathStyle options support S3-compatible providers (MinIO, DigitalOcean Spaces, Backblaze B2) — making the abstraction provider-agnostic beyond just AWS.
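Thanks to those two options, pointing the abstraction at a local MinIO instance should only need configuration along these lines. All values here are illustrative, and the exact env var names are assumed to mirror the constants above:

```shell
# Hypothetical MinIO configuration (values are illustrative)
STORAGE_PROVIDER=s3
S3_BUCKET=uploads
S3_REGION=us-east-1              # MinIO largely ignores region, but the SDK wants one
S3_ENDPOINT=http://localhost:9000
S3_FORCE_PATH_STYLE=true         # MinIO serves buckets path-style by default
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
```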
URL TO KEY CONVERSION
When the application needs to delete a file referenced by its public URL, we need to convert the URL back to a storage key:
export function urlToKey(url: string): string {
  if (!url) return "";
  // Local URL: "/uploads/blog/file.jpg" → "uploads/blog/file.jpg"
  if (url.startsWith("/")) return url.slice(1);
  // CloudFront URL: strip domain prefix
  if (CLOUDFRONT_DOMAIN && url.includes(CLOUDFRONT_DOMAIN)) {
    return url.replace(`https://${CLOUDFRONT_DOMAIN}/`, "");
  }
  // S3 URL: extract key from full URL
  if (url.includes(S3_BUCKET)) {
    const pattern = new RegExp(`https?://(?:${S3_BUCKET}\\.s3[^/]*/|[^/]+/${S3_BUCKET}/)`);
    return url.replace(pattern, "");
  }
  return url;
}
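A few example conversions make the three cases concrete. This snippet repeats the same logic with hypothetical config values ("my-bucket", "cdn.example.com") inlined so it runs standalone:

```typescript
// Same logic as urlToKey, with hypothetical config values inlined for illustration.
const S3_BUCKET = "my-bucket";
const CLOUDFRONT_DOMAIN = "cdn.example.com";

function urlToKey(url: string): string {
  if (!url) return "";
  if (url.startsWith("/")) return url.slice(1);
  if (CLOUDFRONT_DOMAIN && url.includes(CLOUDFRONT_DOMAIN)) {
    return url.replace(`https://${CLOUDFRONT_DOMAIN}/`, "");
  }
  if (url.includes(S3_BUCKET)) {
    const pattern = new RegExp(`https?://(?:${S3_BUCKET}\\.s3[^/]*/|[^/]+/${S3_BUCKET}/)`);
    return url.replace(pattern, "");
  }
  return url;
}

console.log(urlToKey("/uploads/blog/a.jpg"));                        // "uploads/blog/a.jpg"
console.log(urlToKey("https://cdn.example.com/uploads/blog/a.jpg")); // "uploads/blog/a.jpg"
console.log(urlToKey("https://my-bucket.s3.eu-central-1.amazonaws.com/uploads/blog/a.jpg"));
// "uploads/blog/a.jpg" — virtual-hosted-style S3 URLs match the first regex alternative
```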
CONTENT TYPE HELPER
S3 doesn't infer content types from file extensions; an object uploaded without an explicit ContentType is served with a generic octet-stream type. So we supply the type ourselves via a simple extension-to-MIME mapping:
const MIME_MAP: Record<string, string> = {
  jpg: "image/jpeg", jpeg: "image/jpeg",
  png: "image/png", gif: "image/gif",
  webp: "image/webp", svg: "image/svg+xml",
  ico: "image/x-icon",
};
export function extensionToMime(ext: string): string {
  return MIME_MAP[ext.toLowerCase()] || "application/octet-stream";
}
LOCAL DEVELOPMENT, PRODUCTION S3
The key benefit: developers run with STORAGE_PROVIDER=local (or just omit it — local is the default). Production sets STORAGE_PROVIDER=s3 with credentials. Application code is identical in both environments.
# .env.local (development)
# STORAGE_PROVIDER not set → defaults to "local"
# .env (production)
STORAGE_PROVIDER=s3
S3_BUCKET=aws-boottify
S3_REGION=eu-central-1
S3_ACCESS_KEY_ID=AKIA...
S3_SECRET_ACCESS_KEY=...
CLOUDFRONT_DOMAIN=cdn.boottify.com
THE RESULT
- 5 functions cover all storage operations
- Zero code changes when switching providers
- S3-compatible provider support — not locked to AWS
- Singleton S3 client — no connection overhead per request
- Bidirectional URL/key mapping — works with local, S3, and CloudFront URLs
The abstraction is ~280 lines of TypeScript. It's not a framework — it's a thin layer that keeps infrastructure concerns out of application logic. When we added CloudFront CDN support later, we modified one function (getPublicUrl) and every upload in the system immediately served from the CDN.



