http://git.chromium.org/gitweb/?p=chromium.git;a=tree;f=net/... see all the "directory listing" files.
The unit tests are full of scary cases like
// Tests for "ls -l" style listing in Russian locale (note the swapped
// parts order: the day of month is the first, before month).

> MDY: Belize, USA, parts of Canada, Philippines, Saudi Arabia
> YMD: Japan, China, Iran, small bits of Europe
> DMY: Probably 3/4 of the planet's land surface
That said, YMD should make the most sense (and is most consistent with the universally accepted HMS time format). I try to use YMD wherever I can.
-rwxrwxr-x 1 ftp ftp 123 23 \xd0\xbc\xd0\xb0\xd0\xb9 2011 test
The note about day/month order is useful because all the other tests use the "%b %e" format and the rows are full of other numbers: drwxr-xr-x 1732 266 111 90112 Jun 21 2001 .rda_2
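For illustration, here's a rough sketch in Python (the `parse_ls_date` helper is hypothetical, not from any library) of the guessing game a client has to play just to read the date field of a LIST line: recent files omit the year, old files omit the time, and `%b` only matches English month abbreviations under a C/English locale.

```python
from datetime import datetime

def parse_ls_date(month, day, year_or_time, now=None):
    """Guess a date from the three date tokens of a Unix 'ls -l' line.

    Recent files are listed as 'Jun 21 14:33' (no year); older ones
    as 'Jun 21 2001' (no time). The missing year has to be inferred,
    and '%b' only matches English month names in a C locale, which is
    exactly why the locale-specific tests exist.
    """
    now = now or datetime.utcnow()
    if ":" in year_or_time:          # 'HH:MM' form, year is implied
        hour, minute = map(int, year_or_time.split(":"))
        dt = datetime.strptime(f"{month} {day} {now.year}", "%b %d %Y")
        dt = dt.replace(hour=hour, minute=minute)
        if dt > now:                 # a "future" date means last year
            dt = dt.replace(year=now.year - 1)
        return dt
    return datetime.strptime(f"{month} {day} {year_or_time}", "%b %d %Y")

print(parse_ls_date("Jun", "21", "2001"))  # 2001-06-21 00:00:00
```

And even this heuristic breaks the moment the server's locale isn't English, which is the whole point of that Russian test case.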
http://git.chromium.org/gitweb/?p=chromium.git;a=blob;f=net/...

(Seriously, people forget how awesome rsync is. And it tunnels/compresses nicely over SSH.)
Compare to ftp, sftp and scp, which overwrite the file from the start, possibly breaking your site during the transfer (or definitely breaking your site if the transfer dies part-way through).
which is really troublesome when you're transferring a big file and the target doesn't have enough space for a second copy of that particular file. I, for one, would prefer that "feature" to be optional.
The client listening was largely solved by 'passive' mode, and just about every server and client supports this now.
The firewall and NAT interaction is awkward, but most modern firewalls can deal with this automatically (as long as there's no SSL involved)
And yes, the RFC is 20 years old. But so are many RFCs for long-established veteran protocols that we use all over the net.
In many cases I'd be happy to see FTP replaced (all those anonymous FTP servers may as well be HTTP now), but it's really not that bad.
And so, yes, firewalls and NATs "interact" with FTP --- because they all had to be hacked specifically to deal with the FTP protocol, which moots your rebuttal --- but in doing so they create additional security issues, like the NAT pinning stuff Samy Kamkar posted last year.
It shouldn't be necessary for middleboxes to hack in support for protocols by in-place editing TCP streams and dynamically changing filter/translation rules based on stuff that happens inside of connections. But, thanks to FTP, they do have to do that.
And for what? FTP is even on its own terms a terrible file transfer protocol! For instance, look at how it handles (or, commonly, doesn't handle) file listings.
FTP is an anachronism. It has no more reason to exist today than TFTP --- both were designed to make up for constraints in client software that simply no longer exist anywhere.
Having the two separate connections really is an anachronism which proves to be a big hassle for all parties.
And not even IPv6 will solve this one, as a firewall still needs to know what port to let data through. And because the PORT command lists IP addresses, you can't even transparently run FTP over v6. It's one of the protocols where users need to have special protocol awareness beyond the length of the IP address.
I think most firewalls solve this by actually rewriting the FTP packets on the fly (IIRC Cisco calls these "fixups"). That's seriously, seriously broken.
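To see why middleboxes have to rewrite the stream, here's a small Python sketch (the `parse_port_command` helper is illustrative, not from any library) decoding a PORT argument: the client's IP address and port travel as ASCII decimal digits inside the TCP payload, so a NAT that translates the IP header must also patch this text.

```python
def parse_port_command(arg):
    """Decode the argument of an FTP PORT command (RFC 959).

    The client sends its IP address and listening port as six ASCII
    decimal numbers, e.g. 'PORT 192,168,1,2,78,52'. Since the address
    rides inside the TCP payload, a NAT that translates the IP header
    has to rewrite this text too (Cisco's 'fixup').
    """
    h1, h2, h3, h4, p1, p2 = (int(x) for x in arg.split(","))
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

print(parse_port_command("192,168,1,2,78,52"))  # ('192.168.1.2', 20020)
```

Note that rewriting those digits can even change the payload length, forcing the middlebox to adjust TCP sequence numbers for the rest of the connection.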
SSH and SCP provide a much more secure alternative. SFTP is a hack that suffers from all the same problems.
Now, the religion of SMTP... I often wonder how the spam situation would look if we redid SMTP and required certified TLS. What's worse, we still use SMTP for real stuff every day: I get emails from Etrade, Fidelity and others that I simply don't want flying through cyberspace unencrypted, even if they don't contain real information.
[1]: Before you say that my anecdotal evidence does not a fact make, my point is that there's nothing forcing us to use FTP today. Even the lousiest web host supports SFTP, and none of my machines or VPSs run an FTP server. There's no reason to proclaim that we need to kill FTP, because FTP is a non-issue in today's world. Sure, you can still use it, the same way you can still use telnet if you'd like. But practically nothing relies on it with no alternative.
Many moons ago in my younger days I wrote an FTP based file/tree synchronisation tool. I have since vowed to never touch FTP again.
There are a few command line parameters to learn, but after that it is so simple, efficient and reliable that there is no need for a GUI client such as the various FTP clients. I typically write scripts for specific purposes, syncing only the files that have changed. Using my SSH key means I don't have to type the password all the time.
WebDAV is a trainwreck. SFTP could be nice, but the OpenSSH implementation falls terribly short as an FTPd replacement (the most useful implementation is, ironically, the one in ProFTPd). Sendfile never went anywhere. Network filesystems don't cut it for the FTP use-case either.
People don't use FTP because they like it. They use it for the lack of a viable alternative.
The only reason I ever use ftp is because I'm forced to with my godaddy hosting.
any specific reason?
Well it's a good thing that just about every FTP server in existence supports the MLSD command then.
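For contrast with LIST, here's a minimal sketch (the `parse_mlsd_line` helper is hypothetical) of how simple MLSD output (RFC 3659) is to parse; Python's own ftplib exposes the command as FTP.mlsd().

```python
def parse_mlsd_line(line):
    """Parse one line of MLSD output (RFC 3659).

    Unlike LIST, MLSD is machine-readable by design: semicolon-
    separated 'fact=value' pairs, a single space, then the filename.
    No locales, no column guessing, no missing years.
    """
    facts_part, _, name = line.partition(" ")
    facts = {}
    for fact in facts_part.split(";"):
        if "=" in fact:
            key, _, value = fact.partition("=")
            facts[key.lower()] = value
    return name, facts

name, facts = parse_mlsd_line("type=file;size=123;modify=20110523010203; test")
print(name, facts["size"])  # test 123
```

The catch, of course, is that MLSD is an extension, so a general-purpose client still needs the LIST-parsing heuristics as a fallback.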
Requirements:
* It should only deal with filesystem operations (i.e. SSH is way too problematic).
* It should be firewall-, NAT-, proxy- and browser-friendly.
* It should have cross-platform servers and clients, fully operational via CLI as well as GUI.
* It should be secure (i.e. no http, thanks; https, maybe).
I'd say it's potential startup territory, except I cannot see any monetization opportunity. Also, the usual XKCD on standards: http://xkcd.com/927/
If, on the other hand, you want a system that meets your last 3 requirements, just use SSH/SFTP.
Many recent OSes support it in some fashion natively, there is a cli client (cadaver), and it runs fine over ssl.
It is rather heavyweight and slow though, and doesn't offer quite the range of features that some ftp servers do (ratios, etc). That is likely just down to its lack of popularity and the small number of server options.
SFTP (which you discounted) is probably a better option in many cases though.
A client implementation has been available for Linux since version 2.6 (http://cm.bell-labs.com/wiki/plan9/v9fs/index.html).
Edit: I just found a very recent paper on Improving the performance of Styx based services over high latency links (Styx being the Inferno name for 9P): http://gsyc.es/tr-docs/RoSaC-2011-2.pdf
Any openssh server that supports scp will support sftp unless it has been explicitly disabled.
Critically, there's only a few missing things in my estimation, and they revolve around resumable uploading and structured directory listings for GUI clients.
What hinders wide deployment is the server side: the most widely known implementation is mod_dav for apache and apache was never really made for a common use-case of FTP which is people using it with their unix account credentials for transmitting files.
If you have access to an OSX server, have a look at all the hoops they had to jump through to allow WebDAV with mod_dav and still do that in the context of the corresponding system user.
The other reason for FTP still being popular is legacy systems: over the years, I interfaced so many ERP systems for our product and usually, the only thing that customers can provide is good old FTP (or direct database access).
As this scenario doesn't involve unix accounts, I would love to use WebDAV for all the reasons outlined in the article, but nobody supports it on their end, despite it having been around for 20 years or so.
http://blog.expandrive.com/2009/02/02/ftp-considered-harmful...
With OpenSSH it's pretty easy to setup users in a chroot, with no shell access. The hard part is if you want logging (to audit what they do), then you need to create a log device in their chroot, which adds complication, and you need OpenSSH 5.2+.
sshd_config:
Subsystem sftp internal-sftp -l VERBOSE
Match Group sftponly
ChrootDirectory %h
ForceCommand internal-sftp -f AUTH -l VERBOSE
Make user1 only be able to use SFTP: groupadd sftponly; usermod -a -G sftponly user1; usermod -s /sbin/nologin user1
If you actually want the "-l VERBOSE" logging to work you need to create ~user1/dev, then modify rsyslog.conf with: $AddUnixListenSocket /path/to/user1/home/dev/log

Since then, it is easy to setup a chroot'ed account which acts like an ftp server. Have a look at ChrootDirectory and the internal-sftp subsystem.
I use rbsync to back up to S3 and it really works great, but it was a pain to set up as it's command line only.
And to those mentioning rsync: it's not used for the same use cases as ftp.
ftp is useful for quickly transferring a few files between systems - rsync is used for totally different purposes.