This is pretty useful when creating a CLI for almost any app; I've used it regularly for exactly that.
My post on how to do it: http://austingwalters.com/export-a-command-line-curl-command...
Imagine something like this with curl:
curl << eof
http://example.com/a.htm
http://example.com/b.htm
eof
where curl only opens a single connection. Alas, AFAIK, pipelining is still not enabled in the curl binary.
As I understand it, the --libcurl option only generates code for what is possible with the curl binary, e.g., curl_easy_init(), curl_easy_setopt(), etc.
As such, it will not generate code using curl_multi_init(), curl_multi_setopt(), etc.
I have to automate the code generation myself.
curl --libcurl 1.c http://example.com/a.htm http://example.com/b.htm
grep curl_multi_init 1.c
https://curl.haxx.se/mail/archive-2008-02/0036.html
I know cURL does 1001 other things too, so the two tools aren't really in competition. HTTPie is more akin to Postman or Insomnia.
1. By default curl doesn't follow redirects and I think most use-cases (not all) require that behavior (at least from the cli).
2. Similar to wget, many users who start using curl do so to download something, probably a file. But unlike wget, curl doesn't write to a file; it writes to stdout. Actually, I find curl's behavior much more UNIX-style, but it is probably the first obstacle every new user has to tackle. Nevertheless, in the end this makes curl easier to use, because you don't have to remember which parameter sets the output file name; you just use the universal UNIX redirection operator '>'.
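To make the two styles concrete, here's a small sketch. It uses a local file:// URL so it runs without network access; with http(s) URLs you'd typically also add -L to follow redirects:

```shell
# Create a local file to "download" via curl's file:// protocol support.
printf 'hello from curl\n' > demo.txt

# Style 1: curl writes to stdout, the shell's '>' picks the file name.
curl -s "file://$PWD/demo.txt" > via-redirect.txt

# Style 2: curl's own -o flag names the output file.
curl -s -o via-flag.txt "file://$PWD/demo.txt"

# Both produce identical files.
cmp via-redirect.txt via-flag.txt && echo same
```

(There's also -O, which reuses the remote file's name, for the wget-like case.)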
Not to mention this brings Windows closer and closer to *nix. Windows making itself more familiar to *nix users increases the number of ops folks willing to transition from *nix shops to Windows, increasing Windows' share in the server market.
plus, as others have pointed out, curl handles protocols besides http.
Now I have hope they'll put in a text editor that understands unix line endings.
https://www.bleepingcomputer.com/news/microsoft/microsoft-ap...
Probably the same reason the CMD shell is so bad and is only now being fixed up: since Windows is proprietary, nothing can be improved unless either the Microsoft decision-makers responsible for the product choose to spend budget on it, or there is a directive from higher up in the company.
Ever tried raw output to terminal?
Indeed. I am running latest High Sierra and:
curl --version
curl 7.54.0 (x86_64-apple-darwin17.0) libcurl/7.54.0
LibreSSL/2.0.20 zlib/1.2.11 nghttp2/1.24.0
Not to mention that the PowerShell equivalents of a lot of *nix commands are _much_ better. "Everything is an object" is a brilliant philosophy and it's a joy to use.
Do you think we'll see things like the good old "curl <some url> | bash" for Windows now? They still have no package manager worth using.
The 'download | execute' paradigm, on the other hand, is the complete antithesis of package management: your desire to obtain and execute the code trumps your willingness and patience to wait until it has been vetted by your preferred package manager and installed in a less haphazard way. I fail to see your point.
That's still not a package manager worth using.
I would be thankful if the habit of trusting a random IP with control of one's shell could die, forever.
There are various arguments against the curl-piped-to-shell idiom but "random ip" doesn't seem like a valid one.
Also, take a look at Chocolatey for package management. Yeah it's not a built-in thing, but it's pretty decent.
The Programs & Features pane works just fine for me as a package manager.
The only reason I can think of is if the script partially downloads and only half executes. Doesn't seem "massive" though...
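One common (if partial) mitigation for that truncation risk is for the script author to wrap all the work in a function that is only called on the script's last line: if the download is cut off anywhere earlier, the shell hits an unterminated function definition and errors out instead of running half the steps. A hypothetical sketch (the step names are made up):

```shell
#!/bin/sh
# If this script arrives truncated, the function body below is incomplete,
# the shell reports a syntax error, and nothing executes.
main() {
    echo "step 1: fetch files"
    echo "step 2: install"
}

# Only this final line triggers any side effects.
main "$@"
```

It doesn't address the trust problem, only the half-executed-script one.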
Looking briefly at the list at https://curl.haxx.se/docs/security.html, I see issues for FTP (x2), IMAP, and TFTP in 2017 alone. These protocols, which are outside curl's core competency of HTTP, are likely to get less scrutiny and have more bugs. While FTP shouldn't be removed from curl, I don't think a protocol like TFTP or Gopher is crucial, and I wouldn't mind too much if it got the axe in a distribution I used.
> Finally, I’d like to add that like all operating system distributions that ship curl (macOS, Linux distros, the BSDs, AIX, etc) Microsoft builds, packages and ships the curl binary completely independently from the actual curl project.
Why would everyone rebuild it? There are some security considerations (matching source and binary; disabling "dangerous" stuff) and some feature considerations (disable stuff you don't need to reduce resource usage - maybe), but conceptually this seems so wrong to me.
Conceptually I'd want downstream packagers to talk to upstream developers so that upstream has reasonable defaults and settings and I'd want packagers to just package and make the package follow distribution conventions. But rebuild seems overkill.
Maybe I'm missing something obvious?
It also allows for ease of patching in a stable release -- generally it's preferred to just fix specific high-impact bugs rather than moving to a new upstream version, which might introduce regressions.
(Context: I'm a Debian developer and on the Ubuntu MOTU team)
There are ways to work around it, but it gets messy quickly. And rather than clean up their act, they start championing things like Flatpak, which is basically a throwback to the DOS days of everything living in its own folder tree, with a bit of souped-up chroot thrown on top.
I fully expect that if the likes of Flatpak become mainstream in the Linux world, a flaw found in a lib somewhere will produce a stampede of updates, because every damn project crammed in its own copy to make sure it was present.
On Gentoo et al, the end user does the building. OK, yes the ebuilds are recipes but I've lost count of the times I've used epatch_user (https://wiki.gentoo.org/wiki//etc/portage/patches). You have a near infinite choice of ways to destroy your system, what with USE flags, mixed stable/unstable and all the other crazy stuff. Despite that, my Gentoo systems have been surprisingly stable.
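For the curious, the epatch_user mechanism boils down to dropping patch files under /etc/portage/patches/&lt;category&gt;/&lt;package&gt;/, which Portage then applies automatically during the build. A sketch (using a demo directory instead of the real /etc, and a made-up patch name, so it's safe to run anywhere):

```shell
# Demo root stands in for the real /etc so this runs without privileges.
ROOT=./demo-root

# Patches go under <category>/<package>, e.g. net-misc/curl.
mkdir -p "$ROOT/etc/portage/patches/net-misc/curl"

# A placeholder patch file; any *.patch dropped here gets picked up.
printf -- '--- a/lib/url.c\n+++ b/lib/url.c\n' \
    > "$ROOT/etc/portage/patches/net-misc/curl/my-fix.patch"

ls "$ROOT/etc/portage/patches/net-misc/curl"
```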
In winter an update world session on a laptop keeps you (very) warm 8)
(wrt "Context": Ta for your work)
I have seen many times Debian packagers try to build something from upstream and find that it just does not build anywhere other than the maintainer's computer. The fact that Debian requires that every package it ships can be built by anyone in a generic environment is immensely valuable to free software, even if nobody used the binaries that Debian built. (And to be clear, other distros do the same; I'm just most familiar with Debian.)
I'd agree that in an ideal world, all the patches would be upstream and the binary would not just be equivalent but bit-for-bit reproducible. Some practical reasons why it wouldn't be are that various dependencies are of slightly different versions (e.g., one distro manages to get a new libc uploaded a little bit before another), that downstream conventions are different in different distros (e.g., Red Hat-style distros use /lib64 and Debian-style distros use /lib/x86_64-linux-gnu), or that a dependency has some shortcoming which many but not all distros patch in the same way, and the patch impacts things that use the dependency (e.g., upstream OpenSSL <1.1 does not have symbol versions, but most Linux distros patch them in). Yes, in an ideal world, all these things would be resolved, but there are going to be so many tiny things like this that come up that having infrastructure to accommodate them is the right plan.
Also, as a distro provider, you will want to be sure you can build the application yourself, because you might want to ship an updated library dependency that is ABI-incompatible, and so you must be able to rebuild the consumers of those libraries (curl, for example, in the case of OpenSSL).
Better to let the building be done by people who do builds all day en masse for a single arch, or who have infrastructure set up to do builds for multiple archs easily, than to expect every upstream to have that setup.
(1) Use of a different libc (Alpine Linux with musl)
(2) Most builds are not reproducible; rebuilds are needed for security reasons
(3) Non-rolling-release distros (i.e., most distros) fork upstream projects to backport fixes into their older releases
(4) Different filesystem structure (GoboLinux)
(5) Most distros want to use the build system associated with their own package manager
There are probably many more than that. Most distros don't even try to stay close to the upstream repo and instead maintain a lot of patches.
In the future we will most likely have a distro-specific basic system build, plus container apps (snap, AppImage, Flatpak) built directly by upstream on non-server systems.
I sure hope not. A better system would be something like Nix/Guix or even GoboLinux, which gives us a single package as today, but with the option of installing multiple versions in parallel if upstream has screwed up the API (again).
Flatpak and the like will just be an excuse for upstream to bundle everything and the kitchen sink, resulting in bloat and in having to update a mass of paks rather than individual libs when a flaw is found.
8) output binary format (though these days people generally only use ELF or PE),
9) compile time options (eg some packages will allow you to choose which TLS library to use at compile time)
10) hardware specific optimisations
Basically just a plethora of reasons
There are many different package managers depending on which Linux you're running.
Debian and derivatives use apt, Red Hat uses yum or dnf, SuSE uses zypper, Gentoo has emerge, Arch has pacman... So each distribution needs to build its own package, and it's easier to recompile from source than to slice and dice a binary.
Also, distributions will install the binaries, shared libraries, man pages, etc. to different locations (some to /usr, some to /usr/local, etc.), which is also easier to define at configure time, since most autoconf/make setups support this already.
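As an illustration, those install locations are usually chosen when configuring the build. A hedged sketch of a distro-style invocation (the flags exist in standard autoconf-generated configure scripts, but the paths here are examples, not any particular distro's actual recipe):

```shell
# Choose install locations at configure time rather than patching paths later.
./configure --prefix=/usr \
            --libdir=/usr/lib/x86_64-linux-gnu \
            --mandir=/usr/share/man
make
# Stage the install into a package root instead of the live filesystem;
# the packaging tool then archives pkgroot into a .deb/.rpm/etc.
make DESTDIR="$PWD/pkgroot" install
```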
Finally, they might want to add patches for distribution-specific features or quirks. Or maybe they compile with uClibc instead of glibc.
There are many valid reasons why distributions would and should take the upstream source and build/package it themselves.
People have pointed out tons of reasons why distros do their own builds, but really I don't understand what would possibly be a reason not to!?
The build system itself also is just a piece of software that gets distributed along with the source of the software to be built, and just as you can download and run curl and get a predictable result (the download of some URL), you can download and run the curl build system and get a predictable result (a cURL binary).
So, in that regard, what does it matter what execution of the cURL build system your cURL binary came from?
In particular with the trend towards reproducible builds, where the build result will be bit-identical between different runs of the build system if it's using the same compiler (version), I just don't see why you care!?
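To make that concrete: the reproducible-builds idea is just that two independent runs of the same build, from the same source, yield bit-identical output. A toy sketch with a stand-in "build" step (the real property requires pinning compiler versions, timestamps, and paths):

```shell
# A deterministic stand-in "build": same input always yields the same bytes.
build() { printf 'compiled from: %s\n' "$1" > "$2"; }

build curl.c out-a.bin   # "upstream" build
build curl.c out-b.bin   # "distro" rebuild from the same source

# With reproducible builds, the two artifacts compare equal byte for byte.
cmp out-a.bin out-b.bin && echo bit-identical
```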
Yeah, distros shouldn't just modify the software they package willy-nilly, but that's completely orthogonal to whether they should do their own builds. There are many reasons for applying small changes to enable integration with the distro. In particular, it's unreasonable to expect every one of the thousands of upstream authors of Debian packages, say, to operate machines of all ten CPU architectures that Debian currently supports so they can provide binaries for all of them. So, if you want Debian to be able to distribute software written by people who don't happen to have a System z or a MIPS machine, maintainers have to prepare packages in such a way that binaries for those architectures can be built; and once they do that, it's trivial to also build all the other architectures from the same source. Adding special cases for when a binary is already available for some architecture from upstream would be completely pointless complexity.
This is somewhat tongue-in-cheek, but it's actually a question I don't have a solid yes or no answer to. I can see it both ways.