The short answer is: stuff I wrote.
The longer answer, which you might not have really been looking for, but I'll give you anyway:
I used "Gordian Knot" to convert the DVD movies to AVI files. This software could conceivably be used for pirating movies, but I'm pretty sure that I'm on solid "fair use" ground in using it.
I then used a program I wrote to extract one frame a second from the AVI files and save them out as BMP files. This uses Microsoft's "AVIFile" API, which is pretty good but really poorly documented (again, it's Microsoft), and if you attempt to seek to locations past 90 minutes into the movie, you get uninformative error values with no hint of how to fix the problem.
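If you're curious, the one-frame-a-second sampling boils down to simple seek arithmetic: for each whole second of running time, seek to the nearest frame index. Here's a minimal sketch of that part in Python (the function name is mine, not from my actual tool, which was built on AVIFile):

```python
def second_marks(fps: float, total_frames: int) -> list:
    """Frame indices to seek to, one per whole second of running time.

    For a two-hour movie at 23.976 fps this yields roughly 7,200
    seek targets.
    """
    duration_s = int(total_frames / fps)
    return [round(t * fps) for t in range(duration_s)]
```

So a 4-second clip at 24 fps would sample frames 0, 24, 48, and 72.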
The workaround I discovered for this problem was to re-encode the movie AVI file (weighing in at around two hours) into a pair of smaller AVIs, split at the one-hour mark. The split was a couple of seconds noisy, so I got a few duplicate frames of River in a dream state. That's OK, you probably didn't notice. The tool I used for the re-encoding was FFMPEG, which I spent about a week trying to adapt to do the single-frame capture before I gave up and used AVIFile. That's another story, and not a very interesting one.
So, at this point, I've got something over 40,000 bitmaps at 640x352 (that's a 16:9 aspect ratio, which the TV episodes were aired in). But some of the movie frames are letterboxed: they're still encoded at 16:9, but the movie was shot at 2.35:1, so the BMPs from the movie have black bars on the top and bottom. That's what "letterboxed" means here.
So, the next step in the chain was a short Python script using the Python Image Library to crop the movie frames and get rid of the black bars. This ended up throwing away the sides of each movie frame as well, and I'm still disappointed about that. I've noodled with other approaches involving heterogeneously sized tiles, but that would mean rewriting the rest of the tool chain.
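The crop step looks roughly like this. I'm inferring the exact crop box from the numbers above: the 2.35:1 active area of a 640x352 frame is about 272 pixels tall, and trimming the sides back to 16:9 leaves about 484 pixels of width. A sketch with PIL, under those assumptions:

```python
from PIL import Image

SRC_W, SRC_H = 640, 352  # letterboxed movie frame, as extracted

def crop_letterboxed(frame):
    """Remove the black bars, then trim the sides back to 16:9."""
    active_h = round(SRC_W / 2.35)        # ~272 px of actual picture
    bar = (SRC_H - active_h) // 2         # black bar height, top and bottom
    target_w = round(active_h * 16 / 9)   # ~484 px keeps the 16:9 tile shape
    margin = (SRC_W - target_w) // 2      # pixels discarded from each side
    return frame.crop((margin, bar, margin + target_w, bar + active_h))
```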
Now I have over 40,000 candidate tiles, all the right shape and size. Er, size isn't really all that important - I fix up sizes later on. But the aspect ratios are all good. So I go through that list and catalog each BMP, essentially building a database of very low resolution versions of each one.
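The catalog step amounts to shrinking each tile down to a 5x3 grid of pixels and storing that as its fingerprint. A sketch with PIL (the in-memory dict is illustrative; a real run over 40,000 tiles would want this persisted):

```python
from PIL import Image

TILE_W, TILE_H = 5, 3  # the chunk size used for matching

def thumbprint(image):
    """Reduce a tile to 15 RGB triples: its low-resolution fingerprint."""
    small = image.convert("RGB").resize((TILE_W, TILE_H))
    return tuple(small.getdata())

def build_catalog(tiles):
    """Map tile name -> fingerprint, for every candidate BMP."""
    return {name: thumbprint(img) for name, img in tiles.items()}
```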
Then, I open up my "macro" image - the large scale target that I'm assembling the small tiles to resemble. I resize this in memory, and then take small chunks of it (in this case, 5 pixel by 3 pixel rectangles), which I then query the database for. The database gives me a list of candidate tiles, and I pick one. I do this for the entire image, 5x3 pixels at a time, and eventually I get something that looks kind of like what I started with.
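One reasonable way to do the "pick one" step is a nearest-neighbor search: rank the cataloged tiles by summed squared pixel difference against the 5x3 chunk and take the closest. The metric and names here are my illustration, not necessarily what the real tool does:

```python
def distance(a, b):
    """Summed squared difference between two fingerprints
    (each a sequence of (r, g, b) triples)."""
    return sum((ca - cb) ** 2
               for pa, pb in zip(a, b)
               for ca, cb in zip(pa, pb))

def best_tile(chunk, catalog):
    """Name of the cataloged tile whose fingerprint best matches chunk."""
    return min(catalog, key=lambda name: distance(chunk, catalog[name]))
```

In practice you'd want to avoid always picking the single best match (it produces visible repeats), which is presumably why the database returns a list of candidates to choose among.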
And then I use the original tile artwork (not the 5 pixel by 3 pixel version) to assemble a high-resolution version, suitable for printing. And then I open that in Photoshop or Paint Shop Pro to resample down to web-appropriate resolutions. The resizing for the web is an afterthought, mostly, which explains why it's done using something as pricey as Photoshop. I could certainly resize the images down to web-appropriate sizes as an automated post-process, but it's comparatively rare that I need that step.
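The assembly step is just pasting full-resolution tiles onto a big canvas, one per 5x3 chunk of the macro image. A minimal sketch with PIL, assuming the chosen tiles are already laid out in a 2-D grid (names are mine):

```python
from PIL import Image

def assemble(grid, tile_w, tile_h):
    """Paste full-resolution tiles into one big mosaic image.

    grid is a 2-D list of PIL images, one per matched chunk;
    each is normalized to tile_w x tile_h (the size fix-up step).
    """
    rows, cols = len(grid), len(grid[0])
    mosaic = Image.new("RGB", (cols * tile_w, rows * tile_h))
    for r, row in enumerate(grid):
        for c, tile in enumerate(row):
            mosaic.paste(tile.resize((tile_w, tile_h)),
                         (c * tile_w, r * tile_h))
    return mosaic
```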
There are more details that you might be interested in, or maybe not. Bug me more if I haven't yet bored you to death or even just tears.