Does SDL3 still use integers for coordinates? I got annoyed enough by coordinates not being floating point in SDL2 that I started learning WebGPU instead, even though the game I was working on was 2D.
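To make the pain point concrete, here's roughly what the boundary between a float physics position and the classic SDL2 render call looks like (an illustrative sketch, not my actual engine code; SDL_Rect's fields really are all ints):

```c
#include <SDL2/SDL.h>

/* Illustrative sketch: handing a float physics position to the classic
 * SDL2 render API. SDL_Rect holds int x, y, w, h, so the sub-pixel part
 * of the position is discarded at exactly this boundary. */
static void draw_sprite(SDL_Renderer *ren, SDL_Texture *tex,
                        float x, float y, int w, int h)
{
    SDL_Rect dst;
    dst.x = (int)x;  /* truncation: sub-pixel position lost here */
    dst.y = (int)y;
    dst.w = w;
    dst.h = h;
    SDL_RenderCopy(ren, tex, NULL, &dst);  /* NULL src = whole texture */
}
```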
The issue is that if you want complete decoupling (in the sense of orthogonality) among all four of:
- screen (window) size & resolution (especially if the game doesn't control them)
- sprite/tile image quantization into pixels (scaling, resolution)
- sprite display position, with or without subpixel accuracy
- and a physics engine that uses floating point natively (BulletPhysics)
then achieving this with integer drawing coordinates requires carefully calculating ratios while understanding exactly where you do and do not want to drop the fractional part. Even then you can still run into problems such as an accidental gap (a one-pixel-wide blank column) between every 10th and 11th level tile because your zoom factor overflows by a tenth of a pixel, or jaggy movement with wiggly sprites when the player is moving at a shallow diagonal while the NPC sprites sit at different floating point or subpixel integer coords.
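The tile-gap failure is easy to reproduce on paper. Here's a toy sketch (hypothetical numbers, nothing SDL-specific) of 16 px tiles at a 1.1 zoom: truncating the scaled width once and reusing it opens blank columns, while rounding both edges of each tile carries the remainder and the tiles stay seamless:

```c
#include <stdio.h>

int main(void)
{
    const float tile_w = 16.0f, zoom = 1.1f;  /* scaled tile = 17.6 px */

    /* Buggy: one truncated width for every tile drops the .6 each time. */
    int w = (int)(tile_w * zoom);  /* 17 */
    for (int i = 1; i < 12; i++) {
        int x      = (int)(i * tile_w * zoom);  /* floor of the ideal edge */
        int prev_x = (int)((i - 1) * tile_w * zoom);
        if (x > prev_x + w)
            printf("blank column at x=%d, before tile %d\n", prev_x + w, i);
    }

    /* Fix: derive each width from consecutive rounded edges, so the
     * fractional remainder is carried instead of dropped. */
    for (int i = 0; i < 12; i++) {
        int x0 = (int)(i * tile_w * zoom);
        int x1 = (int)((i + 1) * tile_w * zoom);
        printf("tile %d: x=%d w=%d\n", i, x0, x1 - x0);  /* w is 17 or 18 */
    }
    return 0;
}
```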
A lot of these problems could be (are) because I think of things bottom-up (which is the order of my list above): a physics engine, based on floating point math, is the source of Truth, and each layer above it is just a viewport abstracting the layer beneath. I get the impression SDL was written by and for people with the opposite point of view: that the pixels are important and primary.
And all (most) of these have solutions in terms of pre-scaling, tracking remainders, etc., but I have also written an (unfinished) 3D engine and never had to do any of that, because 3D graphics is floating point native. After getting the 2D engine 90% done with SDL2 (leaving 90% more to go, as we all know), I had a sort of WTF-am-I-even-doing moment looking at the pile of workarounds for a problem that shouldn't exist.
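For what it's worth, the remainder-tracking workaround I mean looks something like this (a sketch of the general technique, not code from my engine): the float position from physics stays the source of truth, and whatever fraction is left after quantizing to a whole pixel gets carried into the next frame instead of thrown away.

```c
/* Sketch of "tracking remainders" for one axis: quantize movement to whole
 * pixels for the renderer, but carry the sub-pixel leftover across frames
 * so motion averages out to the true floating point velocity. */
typedef struct {
    float frac;  /* sub-pixel remainder accumulated across frames */
    int   px;    /* integer pixel coordinate handed to the renderer */
} AxisPos;

static void advance(AxisPos *a, float dx)  /* dx in pixels, from physics */
{
    a->frac += dx;
    int whole = (int)a->frac;   /* whole pixels to move this frame */
    a->frac  -= (float)whole;   /* keep the fraction for next frame */
    a->px    += whole;
}
```

Multiply that bookkeeping across every sprite and both axes and you can see where the pile of workarounds comes from.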
And I say shouldn't exist because I know the final output actually uses floating point in the hardware and the driver; the SDL1/2 API is just imposing the fiction that it's integers. (Neither simple nor direct.) It gets steam coming out of my ears knowing I'm being forced to do something stupid to maintain someone else's fiction, so as nice as SDL otherwise is, I ultimately decided to just bite the bullet and learn to program WebGPU directly.