Replace Supabase Storage with Backblaze B2
Supabase Storage and Vercel Blob are managed object storage sold at a premium markup. Backblaze B2 does the same job at $0.006/GB — and when paired with Cloudflare, egress is completely free. Your code barely changes.
Storage pricing comparison
Note: applicationKey is shown only once
When you create a B2 Application Key, copy the applicationKey value immediately. It's not stored by Backblaze and cannot be recovered. If you miss it, just delete the key and create a new one.
Create a Backblaze account + B2 bucket
Backblaze B2 gives you 10GB free, then $0.006/GB — roughly a quarter of AWS S3's standard storage price, and far cheaper once egress is counted. Setup takes 5 minutes.
- Go to backblaze.com → sign up for a free account
- Go to B2 Cloud Storage → Buckets → Create a Bucket
- Bucket name: something-unique (e.g. myapp-uploads-2024) — globally unique, like S3
- Files in Bucket: Private (recommended) or Public (for public CDN use)
- Default Encryption: off is fine for most apps
- Click Create a Bucket
- Copy your Bucket Name and the Endpoint URL shown on the bucket page (e.g. s3.us-west-004.backblazeb2.com)
# Your bucket details (save these):
BUCKET_NAME=myapp-uploads-2024
B2_ENDPOINT=s3.us-west-004.backblazeb2.com

# Full S3-compatible endpoint:
# https://s3.us-west-004.backblazeb2.com
I just created a Backblaze B2 bucket for my Next.js app. Bucket name: [YOUR_BUCKET]. Endpoint: s3.us-west-004.backblazeb2.com. Help me understand: 1) What's the difference between a private vs public bucket and which I should use for user-uploaded profile photos, 2) How CORS works on B2 and whether I need to configure it for browser uploads, 3) The difference between the native B2 API and the S3-compatible API.
Generate Application Keys (API credentials)
B2 uses Application Keys — like AWS access keys. You create a key scoped to your bucket so a leak can't touch your other data.
- Go to Backblaze → Account → App Keys → Add a New Application Key
- Name it: myapp-server (or whatever makes sense)
- Allow access to: select your specific bucket (not all buckets)
- Type of access: Read and Write
- File name prefix: leave empty (or set a prefix like uploads/ if you want)
- Click Create New Key
- IMMEDIATELY copy: keyID and applicationKey — applicationKey is shown only once
- If you miss it, delete the key and create a new one — it cannot be recovered
# Add to .env.local:
B2_KEY_ID=your-key-id-here
B2_APP_KEY=your-application-key-here
B2_BUCKET_NAME=myapp-uploads-2024
B2_ENDPOINT=https://s3.us-west-004.backblazeb2.com
B2_REGION=us-west-004  # must match your endpoint region
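B2_REGION must match the region embedded in your endpoint host, and a typo here produces confusing signature errors. A small helper can derive one from the other so they never drift apart — this is a hypothetical convenience function, not part of any SDK:

```typescript
// Derive the B2 region from an S3-compatible endpoint host or URL.
// B2 endpoints look like: s3.us-west-004.backblazeb2.com
export function regionFromEndpoint(endpoint: string): string {
  const host = endpoint.replace(/^https?:\/\//, "")
  const match = host.match(/^s3\.([a-z]+-[a-z]+-\d+)\.backblazeb2\.com$/)
  if (!match) throw new Error(`Unrecognized B2 endpoint: ${endpoint}`)
  return match[1]
}
```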
I have a Backblaze B2 Application Key: keyID=[YOUR_KEY_ID] and applicationKey=[YOUR_APP_KEY]. The bucket is at endpoint s3.us-west-004.backblazeb2.com. Help me: 1) Add these to my .env.local file in the correct format for the @aws-sdk/client-s3 package, 2) Create a lib/storage.ts file with uploadFile(key, buffer, contentType), getSignedUrl(key, expiresIn), and deleteFile(key) functions using the AWS SDK v3 pointed at B2.
Connect with the AWS SDK (S3-compatible)
B2's S3-compatible API means you use the standard AWS SDK — just point it at B2's endpoint. No Backblaze-specific SDK needed.
- Install: npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
- Create lib/storage.ts — configure S3Client with B2 credentials and endpoint
- Use PutObjectCommand to upload files
- Use GetObjectCommand + getSignedUrl for time-limited download links
- Use DeleteObjectCommand to remove files
- The API follows the S3 spec — almost any S3 example you find online works on B2 unchanged
// lib/storage.ts
import { S3Client, PutObjectCommand, DeleteObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

// Standard AWS SDK v3 client, pointed at B2's S3-compatible endpoint
const s3 = new S3Client({
  region: process.env.B2_REGION!,
  endpoint: process.env.B2_ENDPOINT!,
  credentials: {
    accessKeyId: process.env.B2_KEY_ID!,
    secretAccessKey: process.env.B2_APP_KEY!,
  },
})

// Server-side upload; returns the file's path-style B2 URL
export async function uploadFile(key: string, buffer: Buffer, contentType: string) {
  await s3.send(new PutObjectCommand({
    Bucket: process.env.B2_BUCKET_NAME!,
    Key: key,
    Body: buffer,
    ContentType: contentType,
  }))
  return `${process.env.B2_ENDPOINT}/${process.env.B2_BUCKET_NAME}/${key}`
}

// Time-limited URL the browser can PUT to directly — the upload's
// Content-Type header must match contentType or the signature check fails
export async function getPresignedUploadUrl(key: string, contentType: string) {
  return getSignedUrl(s3, new PutObjectCommand({
    Bucket: process.env.B2_BUCKET_NAME!,
    Key: key,
    ContentType: contentType,
  }), { expiresIn: 3600 })
}

// Time-limited download link for private files (valid 1 hour)
export async function getPresignedDownloadUrl(key: string) {
  return getSignedUrl(s3, new GetObjectCommand({
    Bucket: process.env.B2_BUCKET_NAME!,
    Key: key,
  }), { expiresIn: 3600 })
}

export async function deleteFile(key: string) {
  await s3.send(new DeleteObjectCommand({
    Bucket: process.env.B2_BUCKET_NAME!,
    Key: key,
  }))
}

Create a complete lib/storage.ts file for my Next.js app that uses Backblaze B2 via the AWS SDK v3 S3-compatible API. I need: 1) An S3Client configured with B2_KEY_ID, B2_APP_KEY, B2_ENDPOINT, B2_REGION env vars 2) uploadFile(key: string, buffer: Buffer, contentType: string): Promise<string> — returns the public URL 3) getPresignedUploadUrl(key: string, contentType: string, expiresIn = 3600): Promise<string> — for direct browser uploads without going through my server 4) getPresignedDownloadUrl(key: string, expiresIn = 3600): Promise<string> — for private file access 5) deleteFile(key: string): Promise<void> Also create an API route /api/upload/presign that returns a presigned upload URL for authenticated users.
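However you wire up the presign route, let the server choose the object key rather than trusting the client's filename. Here is a sketch of one approach — the uploads/ prefix and the sanitization rules are assumptions, adjust to your app:

```typescript
import { randomUUID } from "node:crypto"

// Build a collision-free, URL-safe object key for a user upload.
// The original filename is kept (sanitized) so downloads stay readable.
export function makeUploadKey(userId: string, filename: string): string {
  const safe = filename
    .toLowerCase()
    .replace(/[^a-z0-9._-]+/g, "-") // collapse anything unsafe in a URL path
    .replace(/^-+|-+$/g, "")        // trim stray leading/trailing dashes
  return `uploads/${userId}/${randomUUID()}-${safe}`
}
```

Pass the resulting key to getPresignedUploadUrl and store it in your database alongside the upload record.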
Make files fast with Cloudflare (free egress)
B2 egress is free when served through Cloudflare — because Backblaze and Cloudflare are part of the Bandwidth Alliance. This makes B2 effectively free for read-heavy apps.
- If your domain DNS is already on Cloudflare: you're most of the way there
- Go to Backblaze bucket → Bucket Settings → enable "Cloudflare CDN compatible" mode
- In Cloudflare: add a CNAME record — files.yourdomain.com → your B2 bucket endpoint
- Enable Cloudflare proxy (orange cloud) on that CNAME — this routes traffic through Cloudflare
- Now file URLs use your domain: https://files.yourdomain.com/uploads/photo.jpg
- Cloudflare caches the files at edge — faster for users AND no B2 egress charges
- For private files: skip Cloudflare CDN, use presigned URLs instead (they bypass CDN)
# Cloudflare DNS record:
Type: CNAME
Name: files
Target: myapp-uploads-2024.s3.us-west-004.backblazeb2.com
Proxy: ✅ (orange cloud — this enables free egress)

# Your public file URLs become:
https://files.yourdomain.com/uploads/photo.jpg

# Update .env.local:
B2_PUBLIC_URL=https://files.yourdomain.com
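With B2_PUBLIC_URL set, it helps to build public URLs in one place instead of string-concatenating all over the app. A minimal sketch (publicFileUrl is a hypothetical helper, not a library function):

```typescript
// Build a public CDN URL for an object key.
// base is your Cloudflare-proxied domain (B2_PUBLIC_URL),
// e.g. https://files.yourdomain.com
export function publicFileUrl(base: string, key: string): string {
  const trimmed = base.replace(/\/+$/, "") // tolerate a trailing slash
  // Encode each path segment, but keep the "/" separators intact.
  const path = key.split("/").map(encodeURIComponent).join("/")
  return `${trimmed}/${path}`
}
```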
I want to serve my Backblaze B2 files through Cloudflare to get free egress and CDN caching. My B2 bucket endpoint is s3.us-west-004.backblazeb2.com and my bucket name is myapp-uploads-2024. My domain is yourdomain.com and it's already on Cloudflare. Help me: 1) Configure the Cloudflare CNAME for files.yourdomain.com, 2) Set up the correct Cloudflare settings (cache rules, SSL mode), 3) Update my lib/storage.ts to return files.yourdomain.com URLs instead of the B2 endpoint URL for public files.
Replace Supabase Storage or Vercel Blob calls
If you're migrating from Supabase Storage or Vercel Blob, your upload code barely changes. The API patterns are almost identical.
- Supabase: supabase.storage.from("bucket").upload(path, file) → uploadFile(key, buffer, contentType)
- Supabase: supabase.storage.from("bucket").getPublicUrl(path) → your B2 public URL
- Vercel Blob: put(filename, file) → uploadFile(key, buffer, contentType)
- Vercel Blob: url property → your B2 public URL or presigned URL
- For browser uploads: use presigned URLs instead of routing through your server (faster + cheaper)
- Migration of existing files: use rclone to copy from Supabase/S3 to B2 in one command
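The key mapping is mechanical. If your database stores full Supabase public URLs rather than bare paths, a one-off converter can rewrite them to B2 keys during migration — this is a hypothetical helper that assumes the standard /storage/v1/object/public/ URL shape:

```typescript
// Extract the object path from a Supabase Storage public URL so it can be
// reused as the B2 object key. Supabase public URLs look like:
//   https://<project>.supabase.co/storage/v1/object/public/<bucket>/<path>
export function supabaseUrlToKey(url: string): string {
  const marker = "/storage/v1/object/public/"
  const idx = url.indexOf(marker)
  if (idx === -1) throw new Error(`Not a Supabase public object URL: ${url}`)
  const bucketAndPath = url.slice(idx + marker.length)
  // Drop the bucket segment; keep the rest as the key.
  const [, ...rest] = bucketAndPath.split("/")
  return decodeURIComponent(rest.join("/"))
}
```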
# Migrate existing files from Supabase Storage to B2 using rclone:
# Install: brew install rclone

# Configure Supabase as source (S3-compatible):
rclone config create supabase s3 \
  provider=Other \
  endpoint=https://[project].supabase.co/storage/v1/s3 \
  access_key_id=[your-key] \
  secret_access_key=[your-secret]

# Configure B2:
rclone config create b2 b2 \
  account=[B2_KEY_ID] \
  key=[B2_APP_KEY]

# Copy everything:
rclone copy supabase:your-bucket b2:myapp-uploads-2024 --progress
I'm migrating file uploads from [Supabase Storage / Vercel Blob] to Backblaze B2. My current upload code is: [PASTE YOUR CURRENT UPLOAD CODE HERE] Replace it with: 1) A server-side API route that returns a presigned upload URL, 2) A client-side component that fetches the presigned URL, uploads directly to B2 (no server bandwidth cost), 3) Saves the file key/URL to my database after upload. Also show me the rclone command to migrate all existing files from my old storage to B2.
Storage sorted.
10GB free, then $0.006/GB, free egress through Cloudflare. Your files, your bucket, no vendor controlling your costs.