So I built bother mostly over a weekend. I first tried Sharp with Node and had great results, but I couldn't get Sharp working in the browser. I wasn't happy with the other existing libraries, so I figured I could use the canvas directly, and it worked out great.
Although most of it is basic, I did have fun figuring out the math required to scale the canvas from a specific origin, so that pinch-to-zoom and scroll-wheel zoom feel natural. I have the basic idea diagrammed [1] and plan to write about it.
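The core trick, sketched below with a hypothetical `zoomAbout` helper (not bother's actual code), is to adjust the translation so the world point under the cursor stays fixed: since a screen coordinate is `world * scale + t`, keeping the cursor point stationary forces `tx' = px - (px - tx) * factor`.

```typescript
// A 2D view transform: screen = world * scale + (tx, ty).
type Transform = { scale: number; tx: number; ty: number };

// Zoom by `factor` about the screen point (px, py), e.g. the cursor
// or pinch midpoint, so that point stays visually fixed.
function zoomAbout(t: Transform, px: number, py: number, factor: number): Transform {
  // World point under the cursor: w = (p - t) / scale.
  // After scaling, we need p = w * (scale * factor) + t',
  // which solves to t' = p - (p - t) * factor.
  return {
    scale: t.scale * factor,
    tx: px - (px - t.tx) * factor,
    ty: py - (py - t.ty) * factor,
  };
}
```

Applying the result with `ctx.setTransform(scale, 0, 0, scale, tx, ty)` before redrawing gives natural-feeling zoom for both wheel and pinch gestures.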
I wrote a small helper [2] that I might release standalone as a library after expanding the feature set.
I plan to add support for slicing. I have Polaroid pictures I want to scan, and phone apps don't work great, especially for scanning a large number of pictures. I'd like to scan multiple photos at once on a flatbed scanner and then slice them into their own separate images.
I will be making WebGL implementations of Canny edge detection and the probabilistic Hough line transform, which I anticipate will be a lot of fun. This will probably take more than a weekend though.
I have some more ideas, like maybe using generative outpainting instead of white padding? I'd love more suggestions for what I could add to bother, ideally mundane batch image-processing tasks that can be automated.
[1] https://excalidraw.com/#json=VB-95wXb8mmw-WEIe2pNu,Abn9sV1Py...
[2] https://github.com/d4mr/bother/blob/main/src/lib/bother.ts