You hear complaints ad nauseam about how abstractions are always leaky, how they are confusing, and so on. And yet here I am typing this into a text box displayed in a web browser running on top of an operating system and countless libraries, rendered from a mix of HTML, JavaScript, and CSS source which was delivered over TLS-wrapped HTTP using TCP sockets routed over an IPv4 connection which was repeatedly translated back and forth into Ethernet frames, all on a machine which schedules the execution of binary blobs compiled from a multitude of languages to the AArch64 instruction set onto any number of virtual cores while carefully managing resources like permanent storage, processor cache, and random-access memory, guided by electrical signals from a keyboard which are decoded into meaningful glyphs according to my configured layout and character set, all displayed upon an OLED film driven by an HDMI-encoded signal transmitted over a bundle of wires.
Which is to say, we spend every day comfortably resting upon a truly mind-boggling tower of abstraction layers which, for the most part, work pretty well. So not only can it clearly be done, it must be done in order to provide anything like the computing experiences we expect and rely on in day-to-day life.
Rather than shy away from abstraction because bad abstractions are bad, we should spend more effort learning how to design and promote good abstractions, since they are the foundation upon which our entire profession is inescapably built.