200 Product Photos, One Afternoon
Last month a friend called me in a mild panic. She'd just gotten back from a product photoshoot — 200 images, all straight-out-of-camera JPEGs averaging 8 MB each. Her Shopify store needed WebP files under 500 KB, and the listing had to go live by end of day. "Can you just... do something with these?"
That conversation is basically why I sat down to write this. Batch image compression — compressing a hundred or more images at once without losing your mind — requires a bit of a system. I've landed on a few approaches over the years depending on the situation, and I figure it's worth sharing what actually works.
For Quick Jobs, Just Use a Browser Tool
If I have fewer than 50 images and I don't need anything fancy, I'll drag them into a browser-based tool and call it done. CanYouSmoosh is what I reach for because it runs entirely in the browser — nothing gets uploaded to a server, which matters when you're working with client photos or anything you'd rather not send over the wire.
The workflow is dead simple: drop your files in, pick your format and quality, hit compress, download. No accounts, no installs, no waiting for some server queue.
That said, browser tools have a ceiling. Somewhere around 50-80 images, things start to slow down depending on your machine's RAM and the file sizes you're working with. Your browser tab is doing real work — encoding images is CPU-intensive even with WebAssembly. If you're regularly processing hundreds of images, you'll want something heavier.
When It's Time for the Command Line
I resisted CLI tools for a long time because the browser workflow felt "good enough." Then I got a project with 1,200 images across four directories and realized I needed to actually script this.
The tool I use most is cwebp from Google's WebP utilities. Here's what I actually run:
for f in *.jpg; do cwebp -q 80 "$f" -o "${f%.jpg}.webp"; done
That's it. One line, and it chews through a directory of JPEGs, converting them all to WebP at quality 80. If I need to resize them at the same time:
for f in *.jpg; do cwebp -q 80 -resize 1200 0 "$f" -o "${f%.jpg}.webp"; done
The 0 for height means "keep the aspect ratio," which is almost always what you want.
For PNG optimization specifically, oxipng is excellent. The -o 4 flag sets the optimization level (0-6, higher is slower but smaller), and --strip safe removes metadata chunks that can be dropped without affecting how the image renders:
oxipng -o 4 --strip safe *.png
And if I need AVIF output (which I increasingly do — the compression ratios are wild), avifenc from libavif does the job:
for f in *.jpg; do avifenc --min 20 --max 30 "$f" "${f%.jpg}.avif"; done
AVIF encoding is slow, though. Noticeably slower than WebP. On a big batch you'll feel it. I usually kick it off and go make coffee.
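If you have cores to spare, you can at least run several encoders at once. This is a sketch rather than part of my usual setup: it assumes GNU xargs (the -P flag sets the number of parallel jobs) and that avifenc from libavif is on your PATH.

```shell
# Encode every JPEG in the current directory to AVIF, four jobs at a time.
# Assumes GNU xargs and libavif's avifenc; bails out politely otherwise.
jobs=4
if command -v avifenc >/dev/null 2>&1; then
  find . -maxdepth 1 -name '*.jpg' -print0 |
    xargs -0 -P "$jobs" -I {} sh -c 'avifenc --min 20 --max 30 "$1" "${1%.jpg}.avif"' _ {}
else
  echo "avifenc not found; install libavif first" >&2
fi
```

AVIF encoders will happily eat whatever CPU you give them, so on an 8-core machine bumping the job count usually keeps the batch moving without much downside.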
The real power of CLI tools is that you can chain them together, put them in a Makefile or a shell script, and run the same thing every time. I have a little script called smoosh.sh in half my projects that handles the resize-and-convert step. Nothing fancy, maybe 15 lines of bash, but it saves me from remembering flags every time.
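For what it's worth, here's roughly what such a script can look like. This is a sketch under some assumptions (JPEG sources, WebP output, a fixed width, an output directory named compressed), not my actual smoosh.sh:

```shell
#!/bin/sh
# smoosh.sh (sketch): resize every JPEG in the current directory and
# convert it to WebP in a separate output folder, never touching sources.
set -u

OUT_DIR="compressed"
QUALITY=80
WIDTH=1200

mkdir -p "$OUT_DIR"

if command -v cwebp >/dev/null 2>&1; then
  count=0
  for f in *.jpg; do
    [ -e "$f" ] || continue   # no matches: skip the literal "*.jpg" glob
    cwebp -q "$QUALITY" -resize "$WIDTH" 0 "$f" -o "$OUT_DIR/${f%.jpg}.webp"
    count=$((count + 1))
  done
  echo "converted $count file(s) into $OUT_DIR/"
else
  echo "cwebp not found; install Google's WebP utilities first" >&2
fi
```

The point isn't this exact script; it's that the flags, the width, and the output location are written down once, so every run is identical.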
Build Tool Integration
If you're already in a JavaScript project with a build pipeline, you might not need a separate step at all. Vite, Webpack, and most modern bundlers have image optimization plugins that handle compression at build time.
I've used vite-plugin-imagemin and image-minimizer-webpack-plugin on different projects. They work fine. The upside is that compression happens automatically whenever you build — you literally don't have to think about it. The downside is that your build gets slower, and debugging image quality issues means digging through plugin config.
Honestly, for most projects I prefer keeping image processing as a separate step. Build tools should build your app. Image optimization is a content pipeline concern. But I know plenty of people who disagree, and the "just put it in the build" approach is perfectly valid if it fits your workflow.
The Mistakes I've Made So You Don't Have To
I once compressed originals in place. Not copies. The originals. I ran an optimization script directly on my source directory because I was in a rush and didn't want to deal with an output folder. The compression was lossy. The originals were gone. I noticed the quality issue two weeks later when the client asked for a crop of one of the hero images and I had nothing clean to work from.
Now I never touch source files. Every script I write outputs to a separate directory. Every single one. It takes five extra seconds to set up and it has saved me more than once.
mkdir -p compressed
for f in *.jpg; do cwebp -q 80 "$f" -o "compressed/${f%.jpg}.webp"; done
I used to set quality way too low. Quality 60 sounds reasonable until you actually look at the results on a retina display. (I wrote more about finding the right settings in my guide on how to reduce image size without losing quality.) I've settled on 75-82 for most WebP work and 70-78 for AVIF (AVIF holds up better at lower quality numbers). But these are starting points — I always spot-check a few images before running the full batch, especially if the content varies. A photo of text needs higher quality than a lifestyle shot with soft backgrounds.
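Spot-checking doesn't have to be pure squinting, either. A quick loop over a few quality levels on one representative image shows you the size trade-off before you commit to the full batch. Here, sample.jpg is a placeholder; swap in one of your own files:

```shell
# Encode one sample image at several quality levels and print the
# resulting file sizes. Assumes cwebp is installed; "sample.jpg" is
# a placeholder for one of your own images.
sample="sample.jpg"
qualities="70 75 80 85"
if command -v cwebp >/dev/null 2>&1 && [ -f "$sample" ]; then
  for q in $qualities; do
    cwebp -quiet -q "$q" "$sample" -o "check-q$q.webp"
    printf 'q=%s -> %s bytes\n' "$q" "$(wc -c < "check-q$q.webp")"
  done
else
  echo "need cwebp and a $sample on hand to spot-check" >&2
fi
```

Then open the outputs side by side at 100% zoom. The numbers tell you what you're saving; your eyes tell you what it costs.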
I didn't think about naming conventions. This bit me on a project where the CMS expected specific filenames. My script was outputting photo-001.webp but the CMS wanted photo-001.jpg.webp or something equally dumb. Now I plan the output naming before I start, especially if the files are going into a system that cares about what they're called.
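Shell parameter expansion makes either naming scheme a one-character difference, which is exactly why it's worth deciding up front. A quick illustration:

```shell
# Two output-naming schemes for the same source file.
f="photo-001.jpg"
replaced="${f%.jpg}.webp"  # strips .jpg, then appends .webp
appended="$f.webp"         # keeps the original extension, then appends
echo "$replaced"           # photo-001.webp
echo "$appended"           # photo-001.jpg.webp
```

Pick whichever your CMS or templates expect, and bake it into the script so you never think about it again.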
What I'd Actually Recommend
If you're compressing images a few times a month, a browser tool like CanYouSmoosh is genuinely all you need. It handles the common formats, runs locally so your files stay private, and there's nothing to install or configure.
If you're doing this regularly — weekly blog posts with lots of images, e-commerce catalog updates, anything where "batch of images" is a recurring task — learn the CLI tools. Spend an hour setting up a script that does exactly what you need, and then you'll have it forever. The initial setup cost pays for itself after the third or fourth time you use it.
If you're on a team and consistency matters, bake it into your build or CI pipeline. Have the optimization happen automatically so nobody has to remember to do it. A GitHub Action that runs cwebp on any new images in a PR is a surprisingly effective setup.
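A minimal workflow along those lines might look like the following. It's a sketch: the trigger path, quality, and file layout are all assumptions you'd adjust for your repo, and note that it only converts files in the runner's checkout (committing the results back, or failing the check on oversized images, needs an extra step):

```yaml
# .github/workflows/compress-images.yml (sketch)
name: Compress new images
on:
  pull_request:
    paths:
      - "assets/images/**"
jobs:
  compress:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install WebP tools
        run: sudo apt-get update && sudo apt-get install -y webp
      - name: Convert JPEGs to WebP
        run: |
          for f in assets/images/*.jpg; do
            [ -e "$f" ] || continue
            cwebp -q 80 "$f" -o "${f%.jpg}.webp"
          done
```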
The one thing I'd push back on is the idea that you need to pick just one approach. I use all of these depending on the situation. Five images for a blog post? Browser tool, thirty seconds, done. New batch of product photos? Shell script. Ongoing project with a team? Build pipeline. The "best" tool is whichever one gets the job done without making you think about image compression more than you have to.
If you're doing this for a website specifically, my post on image optimization for web performance covers how to make sure your compressed images actually improve your Core Web Vitals scores.
Because honestly, nobody wakes up excited to optimize images. You just want them small, sharp, and done — so you can get back to the actual work.