E.g.,
curl https://dotfilehub.com/knoebber/emacs
results in plain text
curl https://fuse.pl/beton/10print.html # with code highlighting
curl https://fuse.pl/beton/cegla.html # just prose

    while True: print(choice("\\/"), end="")

(the `format` call doesn't seem needed, and `choice` works on any iterable, including strings of characters.) Al Sweigart has a repository of "scroll art" similar to 10 PRINT that might interest you: https://github.com/asweigart/scrollart
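For anyone who wants to run the simplified version: here's a self-contained sketch (bounded to a few hundred characters instead of `while True`, so the demo terminates):

```python
# Self-contained take on the simplified 10 PRINT one-liner.
# choice() picks from any sequence, including a two-character string.
from random import choice

for _ in range(400):    # bounded so the demo terminates
    print(choice("\\/"), end="")
print()
```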
Thanks!
How does the server know what characters to send so that my specific command line interprets it in a nice way? Sorry if I'm not being very articulate.
EDIT: doesn't work when piping to less.
> EDIT: doesn't work when piping to less.
try -R
What if you pass '-r' to less? (I can't verify myself because it worked without -r for me)
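To answer the question above (my best understanding, not specific to this particular server): the server just embeds ANSI escape sequences in the text, and your terminal interprets them; curl passes the bytes through untouched. `less` shows the raw bytes unless `-R`/`-r` lets the control sequences through. For example:

```python
# ANSI SGR escape sequences: the terminal renders these; curl just
# passes the bytes through. less needs -R to do the same.
BOLD = "\033[1m"
RED = "\033[31m"
RESET = "\033[0m"

print(f"{BOLD}{RED}# A heading{RESET}")
print("Plain body text, no escapes needed.")
```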
For example: rather than https://mahdi.blog/raw/raw-permalinks-for-accessibility/, it would be https://mahdi.blog/raw-permalinks-for-accessibility.raw
It's a minor nitpick really, but I quite like this idea! I think I'll try to implement this for my website too.
As for the other people here wondering why User Agents weren't used for this:
- Using static website hosting goes out the window, which is quite a shame because it makes everything so much easier
- User agents are pretty terrible for determining capabilities and intent (what if someone was using curl to get an actual webpage?)
- It will never cover all types of HTTP clients (a whitelist is pretty terrible, as we have seen from various online services restricting Firefox or Linux users from certain features for no reason other than that their user agents weren't on the list the developers used to check for those features).
I never understand comments like these. If my terminal is 78 characters it's a mess, and if it's 100 characters it's wasting space. If you just don't wrap the lines, my terminal does it at the right width.
Hard wrapping doesn't work well. You need to know the target width to wrap and you don't know that until someone actually opens the file. Every viewer I have ever tried is excellent at soft-wrapping. Let it do its thing.
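The "you need to know the target width" point is easy to see with Python's textwrap: the wrap width is a display-time decision, not something you can bake into the file ahead of time.

```python
import shutil
import textwrap

text = ("Hard wrapping bakes one width into the file; "
        "soft wrapping lets the viewer pick the width at display time.")

# The width is only known when someone actually opens the file.
width = shutil.get_terminal_size(fallback=(80, 24)).columns
print(textwrap.fill(text, width=width))
```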
This is possible with HTML and CSS, but not in plain text. Wasted space is something I can handle, but badly wrapped code is something I dislike.
Just tried it on a vintage early-'80s 40-column terminal, and it works better than I expected. I thought a lot of words would be cut off on the right side, but the wrapping was about 90% correct. Perhaps it's just a coincidence, but this is what happened just now.
lynx -dump <regular URL>
elinks -dump <regular URL>
(not the same thing of course, but it doesn't require anything from the server other than reasonable HTML)
If the site is not a static one, you could check the request's User Agent server-side, and return the raw version directly (or redirect to /foo/raw) if the UA contains 'curl' or 'wget'.
If the site is static and you are able & willing to change your vhost config, you could detect the UA too, and redirect to /foo/raw.
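A rough sketch of the server-side variant (hypothetical names, and with all the usual caveats about UA sniffing mentioned elsewhere in the thread):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def wants_raw(user_agent: str) -> bool:
    """Crude heuristic: treat curl and wget as terminal clients."""
    ua = user_agent.lower()
    return "curl" in ua or "wget" in ua

class RawRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if wants_raw(ua) and not self.path.startswith("/raw/"):
            # Terminal client: redirect to the raw version of the page.
            self.send_response(302)
            self.send_header("Location", "/raw" + self.path)
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(b"regular page (or raw text) goes here\n")

# HTTPServer(("", 8080), RawRedirectHandler).serve_forever()
```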
Just a few ideas. This is a fun little project you've got here. Well done.
1: https://sr.ht/~sircmpwn/kineto/
(i do this for my blog, anachronauts.club/~voidstar. i kind of hate gemini-the-protocol, but love gemtext-as-default and love having a space where text-forward content reigns.)
Here’s how to make it pretty-ish: https://github.com/dbohdan/caddy-markdown-site
Serving just the markdown as plaintext to e.g. Lynx is straightforward.
Discussion here:
Edit: fixed it by moving the script to its own file.
Example:
curl https://mahdi.blog/raw/self-hosted/ | less
EDIT: fixed typos.
I'm guessing the blog is made by a static site generator, so the above is harder than it seems. I suppose one could add a reverse proxy that redirects to /raw/$PAGE when it sees "curl".