What if instead of a char, getchar() returned an Option<char>? Then you can pattern match, something like this Rust/C mashup:
match getchar() {
    Some(c) => putchar(c),
    None => break,
}
Magical sentinels crammed into return values — like EOF returned by getchar() or -1 returned by ftell() or NULL returned by malloc() — are one of C's drawbacks.

#include <stdio.h>
struct { int err; char c; } myfunc() {
    return { 0, 'a' };
}

int main(int argc, const char *argv[]) {
    { int err; char c; } = myfunc();
    if (err) {
        // handle
        return err;
    }
    printf("Hello %c\n", c);
    return 0;
}
This is (semantically) perfectly possible today; you just have to jump through some syntactic hoops, explicitly naming that return struct type (because, among other things, anonymous structs aren't equivalent types even when structurally equivalent, unless they're named...). Compilers could easily do that for us! It would be such a simple extension to the standard with, imo, huge benefits.

Every time I have to check for in-band errors in C, or pass a pointer to a function as a "return value", I think of this and cringe.
#include <stdio.h>
#include <tuple>

std::tuple<int, char> myfunc() {
    return { 0, 'a' };
}

int main(int argc, const char *argv[]) {
    auto [ err, c ] = myfunc();
    if (err) {
        // handle
        return err;
    }
    printf("Hello %c\n", c);
    return 0;
}
}

More stuff like this in https://pdfs.semanticscholar.org/31ac/b7abaf3a1962b27be9faa2...
AFAIK, no? You can return a pointer to a struct, and you can pass whole structs as arguments, but not, IIRC, return them from functions.
EDIT: Apparently you can, sort of, but not portably; exactly how it works depends on the compiler, and each compiler may define it differently. This means that if you're using a library which returns a struct, and your program uses a different C compiler than the one the library was compiled with, your program may not work. I.e. there is no single stable ABI for functions returning structs.
Therefore I think it’s reasonable to regard it as impossible in practice.
Getchar doesn’t return a char; it returns an int (https://en.cppreference.com/w/c/io/getchar).
⇒ if C didn’t do automatic conversions from int to char, we would have that (in a minimalistic sense)
That wouldn’t work for ftell and malloc (and, in general, most of the calls that set errno), though.
Dammit, I knew that. Thank you for flagging my blunder; being precise is really important in this case. The Linux manpage better explains the return value of getchar:
https://linux.die.net/man/3/getchar
"fgetc(), getc() and getchar() return the character read as an unsigned char cast to an int or EOF on end of file or error."
getchar() needs to return an object the width of an unsigned char, but all the values in that range are taken by possible character values. The return type had to be expanded to int in order to accommodate the sentinel.
The alternative of using an algebraic type is superior because the end-of-stream condition has a different type (so to speak), and furthermore, the programmer has no choice but to deal with it because the character value comes wrapped inside an Option which must be stripped away before the character value can be used.
Really, you also want the type system to express all possible error conditions as well, since getchar() returning EOF can mean either that end-of-file was reached or that some other error occurred!
As someone who has written lots of C code and worked hard to account for all possibilities manually, I really appreciate it when the type system and APIs can express all possibilities and back me up.
They're part of the C standard library. The POSIX I/O APIs don't have these problems. The Linux I/O system calls are even better because they don't have errno.
Honestly, the C standard library just isn't that good. Freestanding C is a better language precisely because it omits the library and allows the programmer to come up with something better.
That would be the textbook case of stupid over-engineering.
> Programs retrieve the data in a file by a system call ... called read. Each time read is called, it returns the next part of a file ... read also says how many bytes of the file were returned, so end of file is assumed when a read says "zero bytes are being returned" ... Actually, it makes sense not to represent end of file by a special byte value, because, as we said earlier, the meaning of the bytes depends on the interpretation of the file. But all files must end, and since all files must be accessed through read, returning zero is an interpretation-independent way to represent the end of a file without introducing a new special character.
Read what follows in the book if you want to understand Ctrl-D down cold.
It's an artifact of that era. Along with "BREAK", which isn't a character either.
GCC only outputs a warning by default: "warning: return type defaults to ‘int’ [-Wimplicit-int]"
Procedural programmers don't generally have a problem with this -- getchar() returns an int, after all, so of course it can return non-characters, and did you know that IEEE-754 floating point can represent a "negative zero" that you can use for an error code in functions that return float or double?
Functional programmers worry about this much more, and I got a bit of an education a couple of years ago when I dabbled in Haskell, where I engaged with the issue of what to do when a nominally-pure function gets an error.
I'm not sure I really got it, but I started thinking a lot more clearly about some programming concepts.
ISO C says that char must be at least 8 bits, and that int must be at least 16. It is entirely legal to have an implementation that has 16-bit signed char and sizeof(int)==1. In which case -1 is a valid char, and there's no way to distinguish between reading it and getting EOF from getchar().
Large swaths of the C standard were built during the heyday of computer design, when you had all sorts of wacky sizes, behaviors and abstractions. Lots of "undefined behavior" is effectively deterministic, because all modern computers have converged to do so many things the same way.
I am begging, please never ever do this. NaN literally exists for this reason. NaN even allows you to encode additional error context and details into the value.
This is a supplementary source of confusion.
> Character 26 was used to mark "End of file" even if the ASCII calls it Substitute, and has other characters for this. Number 28 which is called "File Separator" has also been used for similar purposes. [1]
I think today we would think of character 4 (End of Transmission, Ctrl-D) as the end of file/input marker, but historically Character 26/Ctrl-Z was used, even on disk.
If by procedural you mean, nonsense, then sure... I agree that a function named `getchar` returning an `int` is procedural. :P
(Though by the way: having functions that evaluate to a value when executed is itself a feature that belongs to the functional paradigm, although one so trivial and common that it’s not usually thought as such. But a purely imperative/procedural way of returning values would be via out parameters or global variables.)
When Rust introduced ADTs they were recognizably a concept from functional programming. It's a place or community of practice, not a purely descriptive adjective.
Why are you being snarky?
They clearly mean the issue of modelling partial functions which would normally be done by a side-effect in a procedural language but can’t in a functional language.
For example,
$ python3 -c 'print("".join(chr(c) for c in range(10)))' | python3 -c 'print(list(ord(c) for c in input()))'
will confirm that it doesn't happen in a pipe (the ASCII 4 character there is totally unrelated to EOF).

It was sometimes used to have TYPE print something human-readable and stop before the remaining (binary) file data would scroll everything away.
For binary files, you just assume there is padding at the end of the file to the end of the sector. For text files, the SUB code was used to indicate where the file ended.
One gives a priori information the other a posteriori.
So, is the length of each file stored as an integer, along with the other metadata? This reminds me of how in JavaScript the length of an array is a property, instead of a function that counts it right then, like say in PHP.
Apparently it works. I've never heard of a situation where the file size number did not match the actual file size, nor of a time when the JavaScript array length got messed up. But it seems fragile. File operations would need to be ACID-compliant, like database operations (and likewise do JavaScript array operations). It seems like you would have to guard against race conditions.
Does anyone have a favorite resource that explains how such things are implemented safely?
EDIT: Seems like 26 = EOF is a DOS thing.
EDIT 2: Some confusing comments: https://www.perlmonks.org/bare/?node_id=228760
EDIT 3: A pretty good thread (read NigelQ's reply): http://forums.codeguru.com/showthread.php?181171-End-of-File...
Hoping Cunningham's Law comes into play with this comment. :)
Since I am more used to Windows, where Ctrl-C is copy, I followed other people's suggestion and mapped Ctrl-X to do what Ctrl-C usually does, with:

    stty intr ^X -ixon

This is because X and C are very close, and I couldn't sacrifice Ctrl-V (paste) or Ctrl-Z (background), while I seldom use Ctrl-C.
I'm sure you could do the same with ctrl-d if you really wanted to.
[1]: https://doc.rust-lang.org/std/io/trait.Read.html#method.read...
(In fact, thinking better about it, there are some cases where `read()` could legitimately return `UnexpectedEof`, like when it's a wrapper for a compressed stream which has fixed-size fields, and that stream was truncated in the middle of one of these fields. It's clear that, in that case, `UnexpectedEof` is not an end-of-file for the wrapper; it should be treated as an I/O error.)
Yes, you can. You just end your stream by closing the pipe.
The exception even tells you that "chr() arg not in range(0x110000)" which has nothing to do with range of C's character types.
https://sourceware.org/bugzilla/show_bug.cgi?id=1190
https://sourceware.org/legacy-ml/libc-alpha/2018-08/msg00003...
> All stdio functions now treat end-of-file as a sticky condition. If you read from a file until EOF, and then the file is enlarged by another process, you must call clearerr or another function with the same effect (e.g. fseek, rewind) before you can read the additional data. This corrects a longstanding C99 conformance bug. It is most likely to affect programs that use stdio to read interactive input from a terminal.
Although interestingly somehow I'm still seeing the old behavior in Debian Buster with glibc 2.28 with python3.
import sys

while True:
    b = sys.stdin.read(1)
    print(repr(b))
With old glibc with both python2 and python3 the EOF isn't sticky (as expected). With 2.28 with python2 the EOF is sticky (like you said). With 2.28 with python3 it's not sticky for some reason.

^D (0x04) is EOT (End of Transmission) and 0x03 is ETX (End of Text): https://www.systutorials.com/ascii-table-and-ascii-code/
So, kinda, but somehow I'm happy it never got turned into a weird combination depending on the OS.