My main take-away from this is that Google Drive seems like a nice way to put presentations online :-)
Don't. I've been trying to access the presentation for 10 minutes and it won't allow me:
Wow, this file is really popular! Some tools might be unavailable until the crowd clears.
and then I get redirected to https://support.google.com/accounts/bin/answer.py?hl=en&... (which is stupid, because there's nothing cached/cookied for Google. In fact, I'm in Firefox's "Private Browsing")

#define struct union
#define else
That's evil. I'll have to do it in someone's code some day, just to have some fun.

But apart from that, it's a really nice compilation. I didn't know about the compile-time checks of array sizes, but I have a doubt. What if I pass to a function declared
int foo(int x[static 10])
this pointer:

int *x = (int *) calloc(20, sizeof(int));
Does the compiler skip the check? Does it give me a warning?

EDIT: Funnily enough, on Mac it doesn't give any warning, neither for pointers nor for undersized arrays (i.e., foo(w) with w declared as int w[5] doesn't give a warning). And I've compiled with -std=c99 -pedantic -Wall.
Edit: While we're talking about dark corners, please stop casting the result of functions that return void *. If your code lacks a declaration of the function, the compiler will assume pre-ANSI C semantics and generate code as if the function returned an int.
On machines where pointers do not fit in an int (basically all 64-bit machines), you have just silently (due to the cast there is no warning) truncated a pointer. Worse, it may still appear to work, depending on the malloc implementation and how much memory you allocate.
We have to fix these kinds of bugs on OpenBSD a lot, please help by typing less and let the compiler warn you about silly mistakes :-)
And yes, C++ fucked this up for C. I'll leave it to Linus to say something nice about that..
Better, please help by compiling your C code with -Wimplicit-function-declaration (included in -Wall), and fixing all the problems it reports. Then you won't have to worry about this problem, or a bunch of other problems.
btw, I wrote this talk.
Thanks, I will take that into account.
The problem is, if I want my C code to compile with MSVC, it has to compile as C++ - and even if I abhor Windows for development myself, a lot of developers are using MSVC.
I just wish Microsoft would update their C compiler, at least to C90. But then I suppose the standard has only been around for 23 years, and nobody really uses C anyway.
typedef void (*fptr_t)();

and cast to fptr_t instead of void *. But the bigger problem is how to force users to cast it back to the proper prototype, because calling it through anything else gives undefined behaviour.
Wow! Ugly! Scary! Another good reason to know in fine details just what cast does. At one point, my Visual Basic .NET code actually calls some old C code, and in time I will need to convert to 64 bit addressing. So, I will keep in mind that with 64 bits I have to be especially careful about pointers and C.
Sound static analyzers fall in the first case, but require a lot of work to become precise enough to be usable (i.e., to reduce the number of false alarms). Compilers fall in the second case, in the sense that they don't have to honor such a clause. And in the C99 standard it's actually a "shall" (it just couldn't be a "must" in that case):
"If the keyword static also appears within the [ and ] of the array type derivation, then for each call to the function, the value of the corresponding actual argument shall provide access to the first element of an array with at least as many elements as specified by the size expression."
You can not have a general procedure, but with the help of the programmer / user of the compiler, you can prove all kinds of things.
The real purpose of declaring

int foo(int x[static 10])

is not to produce a warning; that's just a nice possible side effect (and only in some cases). The real purpose is to allow the compiler to optimise the compilation of foo() itself, under the assumption that x will always point to the first element of an array of at least 10 elements.
I don't believe C has any such restrictions, though.
char main[] = { 0xf0, 0x0f, 0xc7, 0xc8, 0xc3 };
(and yes, my machine -- a Pentium MMX -- hung solid and I was rather shocked!)

My gcc compiles it with only this warning:
foo.c:2:6: warning: ‘main’ is usually a function [-Wmain]
hah!

[23] .got.plt PROGBITS 0804954c 00054c 000014 04 WA 0 0 4
[24] .data PROGBITS 08049560 000560 000010 00 WA 0 0 4 <---
[25] .bss NOBITS 08049570 000570 000008 00 WA 0 0 4
66: 0804840a 0 FUNC GLOBAL HIDDEN 14 __i686.get_pc_thunk.bx
67: 08049568 5 OBJECT GLOBAL DEFAULT 24 main <---
68: 08048278 0 FUNC GLOBAL DEFAULT 12 _init
The main symbol lands in .data, not .text, which is what you would expect given that declaration. You might be able to get around that by doing something like:

unsigned char code[] = { 0xf0, 0x0f, 0xc7, 0xc8, 0xc3 };
int main(void)
{
((void (*)())code)();
return 0;
}
But these days NX will usually ruin the fun.

int x = 'FOO!';
will not make demons fly out of your nose: it is not undefined behaviour. It is guaranteed to produce a value; the specific value is implementation-defined (that is, one that the compiler vendor has decided and documented), but it is an integer value, not a demon value.

I'm sure, though, that someone sooner or later will be bitten by code like
int x = 'é';
which is equally implementation-defined.

int x = 'RIFF';
.. if you were packing a WAVE file header.
int x = 'A';
is also implementation-defined.

Nevermind, it would take more than the Lord of the Rings trilogy.
I remember hearing that the prohibition on pointer aliasing was the main reason a Fortran compiler could produce code that outperformed code from a C compiler: it allows the compiler to perform a whole class of optimizations.
It would appear that the restrict keyword lets C programs regain that class of compiler optimizations.
The automatic conversions in JavaScript and PHP seem pretty harmless by comparison.
But "restrict" is a low-level micro-optimization, those tend to be tricky. I don't think a sane C programmer would sprinkle that keyword all across the source base, because as you have pointed out it can cause hard-to-diagnose errors.
In contrast, the automatic conversions in JavaScript and PHP are an "always on" feature you cannot avoid.
So... not that telling then.
Of the C "dark corners" that are problematic, it'd be extremely rare to run into them in most real-world code. You'd have to intentionally go out of your way to write code that will trigger them, and this code often looks obviously suspicious.
It's very much the opposite with JavaScript and PHP. A world of pain and danger opens up the moment you do something as simple as an equality comparison. The problems that can and will arise are well documented, so I won't repeat them here, but it's a much worse (and unavoidable) situation than when compared to C, C++, Java, C#, Python, Ruby or other mainstream languages.
http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-de...
But about those dark corners, I guess the point wasn't to present any particularly nasty gotchas, but rather some precious little lesser known tricks. C has plenty of very well known features you can be bitten by (mostly related to memory management, of course). While the presentation reiterates over some of them, the most valuable parts are about various _good_ parts of the language which are rarely heard of (viz. the usage of `static` inside brackets).
This allows, for example, compilers to replace a `memcpy()` call that has a constant size argument with direct loads/stores.
Author got this wrong: that would be an empty file, which is what once won the IOCCC for the smallest self-replicating program.
http://www.amazon.com/Expert-Programming-Peter-van-Linden/dp...
(Sorry for the offtopic.)
You can actually get a firm grasp of the basics just by reading chapter 3 from Computer Systems: A Programmer's Perspective (http://csapp.cs.cmu.edu/public/samples.html) and practice writing some simple command line programs.
First, what are malloc() and free() doing? That is, what are the details, all the details and exactly how they work?
It was easy enough to read K&R, see how malloc() and free() were supposed to be used, and to use them. But even when they worked perfectly I was unsure of the correctness of my code, especially in challenging situations; I expected problems with 'memory management' to be very difficult to debug, and wanted a lot of help on memory management. I would have written my own 'help' for memory management if I had known what C's memory management was actually doing.
'Help' for memory management? Sure: Put in a lot of checking and be able to get out a report on what was allocated, when, by what part of the code, maybe keep reference counters, etc. to provide some checks to detect problems and some hints to help in debugging.
That I didn't know the details was a bummer.
It was irritating that K&R, etc. kept saying that malloc() allocated space in the 'heap' without saying just what they meant by a 'heap' (which I doubt was a 'heap' as in heap sort).
Second, the 'stack' and 'stack overflow' were always looming as a threat of disaster, difficult to see coming, and to be protected against only by mud wrestling with obscure commands to the linkage editor or whatever. So, I had no way to estimate stack size when writing code or to track it during execution.
Third, doing data conversions with a 'cast' commonly sent me into outrage orbiting Jupiter.
Why? Data conversion is very important, but a 'cast' never meant anything. K&R just kept saying 'cast' as if they were saying something meaningful, but they never were. In the end 'cast' was just telling the type checking of the compiler that, "Yes, I know, I'm asking for a type conversion, so get me a special dispensation from the type checking police.".
What was missing were the details, for each case, on just how the conversion would be done. In strong contrast, when I was working with PL/I, the documentation went to great lengths to be clear on the details of conversion for each case of conversion. I knew when I was doing a conversion and didn't need the 'discipline' of type checking in the compiler to make me aware of where I was doing a conversion.
Why did I want to know the details of how the conversions were done? So that I could 'desk check' my code and be more sure that some 'boundary case' in the middle of the night two years in the future wouldn't end up with a divide by zero, a square root of a negative number, or some such.
So, too often I wrote some test code to be clear on just what some of the conversions actually did.
Fourth, that strings were terminated by the null character usually sent me into outrage and orbit around Pluto. Actually, I saw that null-terminated strings were so hopeless as a good tool that I made sure I never counted on the null character being there (except maybe when reading the command line). So, I ended up manipulating strings without counting on the null character.
Why? Because commonly the data I was manipulating as strings could contain any bytes at all, e.g., the data could be from graphics, audio, some of the contents of main memory, machine language instructions, output of data logging, say, sonar data recorded on a submarine at sea, etc. And, no matter what the data was, no way did I want the string manipulation software to get a tummy ache just from finding a null.
Fifth, knowing so little about the details of memory management, the stack, and exceptional condition handling, I was very reluctant to consider trying to make threading work.
Sixth, arrays were a constant frustration. The worst part was that I could write a subroutine to, say, invert a 10 x 10 matrix but then couldn't use it to invert a 20 x 20 matrix. Why? Because inside the subroutine, the 'extents' of the dimensions of the matrix had to be given as integer constants and, thus, could not be discovered by the subroutine after it was called. So, basically, in the subroutine I had to do my own array indexing arithmetic, starting with data on the size of the matrix passed via the argument list. Writing my own code for the array indexing was likely significantly slower during execution than in, say, Fortran or PL/I, where the compiler writer knows when they are doing array indexing and can take advantage of that fact.
So, yes, no doubt as tens of thousands of other C programmers, I wrote a collection of matrix manipulation routines, and for each matrix used a C struct to carry the data describing the matrix that PL/I carried in what the IBM PL/I execution logic manual called a 'dope vector'. The difference was, both PL/I and C programmers pass dope vectors, but the C programmers have to work out the dope vector logic for themselves. With a well written compiler, the approach of PL/I or Fortran should be faster.
It did occur to me that maybe other similar uses of the C struct 'data type' were the inspiration for Stroustrup's C++. For more, originally C++ was just a preprocessor to C, and at that time and place, Bell Labs, with Ratfor, preprocessors were popular. Actually writing a compiler would have permitted a nicer language.
Seventh, PL/I was in really good shape some years before C was started and had subsets that were much better than C and not much more difficult to compile, etc. E.g., PL/I arrays and structures are really nice, much better than C, and mostly are surprisingly easy to implement and efficient at execution. Indeed, PL/I structures are so nice that they are in practice nearly as powerful as objects and often easier and more intuitive to use. What PL/I did with scope of names is also super nice to have and would have helped C a lot.
Eighth, the syntax of C, especially for pointers, was 'idiosyncratic' and obscure. The semantics in PL/I were more powerful, but the syntax was much easier to read and write. There is no good excuse for the obscure parts of C syntax.
For a software 'platform' for my startup, I selected Windows instead of some flavor of Unix. There I wanted to build on the 'common language runtime' (CLR) and the .NET Framework. So, for languages, I could select from C#, Visual Basic .NET, F#, etc.
I selected Visual Basic .NET and generally have been pleased with it. The syntax and memory management are very nice; .NET is enormous; some of what is there, e.g., for 'reflection', class instance serialization, and some of what ASP.NET does with Visual Basic .NET, is amazing. In places Visual Basic borrows too much from C and would have done better borrowing from PL/I.
I've written some assembler in the machine language of at least three different processors. On one machine I was surprised that my assembler code ran, whatever it was, 5-8 times faster than Fortran. Why? Because I made better use of the registers. Of course, that Fortran compiler was not very 'smart', and smarter compilers are quite good at 'optimizing' register usage. I will write some assembler again if I need it, e.g., for
R(n+1) = (A*R(n) + B) mod C
where A = 5^15, B = 1, and C = 2^47. Why that calculation? For random number generation. Why in assembler? Because basically I want to take two 64-bit integers, accumulate the 128-bit product in two registers, then divide the contents of the two registers by a 64-bit integer and keep the 64-bit remainder. Due to the explicit usage of registers, I usually need to do this in assembler.
But at one point I read a comment: For significantly long pieces of code, the code from a good compiler tends to be faster than the code from hand coded assembler. The explanation went: For longer pieces of code, good compilers do good things for reducing execution time that are mostly too difficult to program by hand which means that the assembler code tends to be using some inefficient techniques.
Everything is about tradeoffs. Fortran uses space-padded strings with no null terminator. On the positive side, this forces everyone to explicitly pass the length they mean, instead of relying on more work at runtime to figure out when to stop by looking for the null sentinel. Passing explicit lengths is good practice in C anyway, because you usually avoid having to scan the contents multiple times / making multiple calls to strlen at different levels in the stack. While everything should be better in the Fortran case, the class of bugs that persist are even harder to find, because poorly written code miscalculates the length, ignores it, etc., stomping over adjacent memory. This probably won't crash, and since other code has to use an explicit length when accessing the buffer, you usually won't notice the problem at the source of the issue. Contrast that with C, where you're more likely to see an issue immediately, as soon as the string is used or passed to something else.
tl;dr Poor programming is poor programming in any language.
With PL/I the maximum length of the string is set when the string is allocated, usually dynamically during execution. The length can be given as a constant in the source code or set from computations during execution. There is also a current length <= the maximum length. When passing that string to a subroutine, the subroutine has to work a little to discover the maximum string length, but, by in effect 'hiding' both the current and maximum length from the programmer of the subroutine, the frequency of some of the errors you mentioned should be reduced.
In Visual Basic .NET, the maximum length of any string is the same, as I recall, 2 GB. Then having the strings be 'immutable' was a cute approach, slightly frustrating at times but otherwise quite nice and a good way to avoid the problems you mentioned.
But, of course, the way I actually used strings in C was close to the way they were supported in Fortran.
And, of course, likely 100,000+ C programmers wrote their own collection of string handling routines where they use a struct to keep all the important data on the string -- say, allocated or not, a pointer to the allocated storage, the maximum allocated length, the current length, etc. (multi-byte character set, anyone?) -- and then pass just a pointer to the struct instead of a pointer to the storage of the string; this again should reduce the frequency of some of the errors you mentioned.
On your
"K&R and other good C references describe their public interface well and that's all you need to know to use them effectively."
I want more. By analogy, all you need to drive a car is what you see sitting behind the steering wheel, but I also very much want to know what is under the hood.
Generally I concluded that for 'effective' 'ease of use', writing efficient code, diagnosing problems, etc., I want to know what is going on at least one level deeper than the level at which I am making the most usage.
Your example of putting a 100,000 byte array on the stack is an example: Without knowing some about what is going on one level deeper, that seems to be an okay thing to do.
2) My remark about the stack is either not quite correct or is not being interpreted as I intended. For putting an array on a push-down stack of storage, I am fully aware of the issues. But on a 'stack', maybe also the one used for such array allocations (that PL/I called 'automatic'; I'm not sure there is any corresponding terminology in C), there are also the arguments passed to functions. It seemed that this stack size had to be requested via the linkage editor, and if too little space was requested, then just the argument lists needed for calling functions could cause a 'stack overflow'. A problem was, it was not clear how much space the argument lists took up.
Then there was the issue of passing an array by value. As I recall, that meant that the array would be copied to the same stack as the arguments. Then one array of 100,000 bytes could easily swamp any other uses of the stack for passing argument lists.
But even without passing big 'aggregates' by value or allocating big aggregates as 'automatic' storage in functions, there were dark threats, difficult to analyze or circumvent, of stack overflow. To write reliable software, I want to know more, to be able to estimate what resources I am using and when I might be reaching some limit. In the case of the stack allocated by the linkage editor for argument lists, I didn't have that information.
3) Sure, I could make use of the strings in C as C intended, just as you state, just for textual data, but then I would also have to assume a single-byte character set.
I thought that that design of strings was too limited for no good reason. That is, with just a slightly different design, C could have had strings that would work for text with a single-byte character set along with a big basket of other data types. That's what was done in Fortran, PL/I, Visual Basic .NET, and the string packages people wrote for C.
The situation is similar to what you said about malloc(): All C provided for strings was just a pointer to some storage; all the rest of the string functionality was just in some functions, some of which, but not all, needed the null termination. So, what I did with C strings was just use the functions provided that didn't need the null terminations or write my own little such functions.
As I mentioned, I didn't struggle with null terminated strings; instead right from the start I saw them as just absurd and refused ever to assume that there was a null except in the case when I was given such a string, say, from reading the command line.
It has appeared that null terminated strings have been one of the causes of buffer overflow malware. To me, expecting that a null would be just where C wanted it to be was asking too much for reliable computing.
4) On casts, we seem not to be communicating well.
Data conversions are important, often crucial. As I recall in C, the usual way to ask for a conversion is to ask for a 'cast'. Fine: The strong typing police are pleased, and I don't mind. And at times the 'strongly typed pointers' did save me from some errors.
But the question remained: Exactly how are the conversions done? That is, for the set D of 'element' data types -- strings, bytes, single/double precision integers, single/double precision binary floating point, maybe decimal, fixed and/or floating, and for any distinct a, b in D, say if there is a conversion from a to b and if so what are the details on how it works?
One reason to omit this from K&R would have been that the conversion details were machine dependent, e.g., depended on being on a 12-, 16-, 24-, 32-, 48-, or 64-bit computer, sign-magnitude, two's complement, etc.
Still, whatever the reasons, I was pushed into writing little test cases to get details, especially on likely 'boundary cases', of how the conversions were done. Not good.
Sure, this means that I am a sucker for using a language closely tied to some particular hardware. So far, fine with me: Microsoft documents their software heavily for x86, 32 or 64 bits, from Intel or AMD, and now a 3.0 GHz or so 8-core AMD processor costs less than $200. So I don't mind being tied to x86.
On PL/I: Thankfully, no, it was not nearly the first language I learned. Why thankfully? Because the versions I learned were huge languages. Before PL/I I had used Basic, Fortran, and Algol.
PL/I was a nice example of language design in the 'golden age' of language design, the 1960s. You would likely understand PL/I quickly.
So, PL/I borrowed nesting from Algol, structures from Cobol, arrays and more from Fortran, exceptional condition handling from some themes in operating system design, threading (that it called 'tasking' -- current 'threads' are 'lighter in weight' than the 'tasks' were -- e.g., with 'tasks' all storage allocation was 'task-relative' and was freed when the task ended), and enough in bit manipulation to eliminate most uses of assembler in applications programming. It had some rather nice character I/O and some nice binary I/O for, say, tape. It tried to have some data base I/O, but that was before RDBMS and SQL.
In the source code, subroutines (or functions) could be nested, and then there were some nice scope of name rules. C does that but with only one level of nesting; PL/I permitted essentially arbitrary levels of nesting which at times was darned nice.
Arrays could have several dimensions, and the upper bound and lower bound of each could be any 16-bit integers as long as the lower was <= the upper -- 32-bit integers would have been nicer, and now 64-bit integers. Such array addressing is simple: just calculate the 'virtual origin', that is, the address of the array component with all the subscripts 0, even if that location is out in the backyard somewhere, and then calculate all the actual component addresses starting from the virtual origin, largely forgetting about the bounds unless you have bounds checking turned on. Nice.
A structure was, first-cut, much like a struct in C, that is, an ordered list of possibly distinct data types, except each 'component' could also be a structure, so that one was really writing out a tree. Then each node in that tree could be an array. So, one could have arrays of structures of arrays of structures. Darned useful. Easy to write out, read, understand, and use. And dirt simple to implement, just with a slight tweak to ordinary array addressing. So, it was just an 'aggregate', still all in essentially contiguous, sequential storage; there was no attempt to have parts of the structure scattered around in storage. E.g., doing a binary de/serialize was easy. The only tricky part was the same as in C: what to do about documenting the alignment of some element data types on certain address range boundaries.
Each aggregate has a 'dope vector' as I described. So, what was in an argument list was a pointer to the dope vector, and it was like a C struct with details on array upper and lower bounds, a pointer to the actual storage, etc.
PL/I had some popularity -- Multics was written in it.
For C, PL/I was solid before C was designed. So, C borrowed too little from what was well known when C was designed. Why? The usual reason given was that C was designed to permit a single pass compiler on a DEC mini-computer with just 8 KB of main memory and no virtual memory. IBM's PL/I needed a 64 KB 360/30. But there were later versions of PL/I that were nice subsets.
It appears that C caught on because DEC's mini computers were comparatively cheap and really popular in technical departments in universities; Unix was essentially free; and C came with Unix. So a lot of students learned C in college. Then as PCs got going, the main compiled programming language used was just C.
Big advantages of C were (1) it had pointers crucial for system programming, (2) needed only a relatively simple compiler, (3) had an open source compiler from Bell Labs, and (4) was so simple that the compiled code could be used in embedded applications, that is, needed next to nothing from an operating system.
The C pointer syntax alone is fine. The difficulty is the syntax of how pointers are used or implied elsewhere in the language. Some aspects of the syntax are so, to borrow from K&R, 'idiosyncratic' that some examples are puzzle problems where I have to get out K&R and review.
To me, such puzzle problems are not good.
I will give just one example of C syntax:
i = j+++++k;
Right: read as j++ + ++k, that is: add 1 to k; add that k to j and assign the result to i; then add one to j. (Strictly, maximal munch tokenizes j+++++k as j++ ++ + k, which doesn't even compile, so you get a puzzle either way.) Semi-, pseudo-, quasi-great.
I won't write code like that, and in my startup I don't want us using a language that permits code like that.
You seem to have found something that works well for your needs, so everything is good.
The last time I had to write some C, I just refreshed my C 'skills' with K&R and reading some of my old code.
For your
"You seem to have found something that works well for your needs, so everything is good."
I agree: I looked at Java early on and didn't like it. From some of the comments and links here at HN, I see that Java has made progress since then. Indeed, some of what I like in Visual Basic .NET (I say ".NET" because there is an earlier version of Visual Basic that is quite different and less 'advanced') seems to have come from Java. So, now I'm glad to have the progress of Java and/or Visual Basic .NET and will return to C only when necessary.
Actually, the last time I worked with C, I wrote only a few lines of it! Instead, I took some Fortran code, washed it through the famous Bell Labs program f2c (apparently abbreviates 'Fortran to C') to translate to C, slightly tweaked the C, compiled it into a DLL, and now call it from Visual Basic .NET.
Maybe what will be waiting for me in the lower reaches is C programming on an early version of Unix without virtual memory and without a good text editor on a slow time sharing computer using a B/W character terminal, 24 x 80!
REF INT i = HEAP INT; # sort of like C++ "new" #
REF INT i = LOC INT;  # allocates from the stack #
or the shorter forms
HEAP INT i;
LOC INT i;
Since 'heap' is the word used in heap sort, it's fair to say that the second use of that word was a misuse. I don't know which use came second and don't really care, but I did want to know the details of the dynamic memory allocation used by C's malloc() and free(). I just would have appreciated an explanation of what malloc() and free() were doing, so that I could write some code, as I described, to 'help' me monitor what my code was doing with memory. Sure, writing a good system for 'garbage collection', complete with reference counts and memory compaction, is difficult, but what malloc() and free() were doing was likely not very tricky. I just wish K&R had documented it.
It's particularly useful for bugs related to memory.
So, someone else dug into the details of how C manages memory and wrote some code to help people find problems; makes good sense.
By the way, shouldn't the right hand side text on slide 7 (the final part of slide 7) talk about the pointers z and x, instead of the values pointed at? (Aside: How do I write "asterisk x" on HN without getting an italicized x?)
Took me a while to understand this: single quotes define single characters, and for some reason C decided to allow multi-character character constants but leave their value implementation-defined. Discussion: http://zipcon.net/~swhite/docs/computers/languages/c_multi-c...
Lots more ambiguities in C++. But it's a challenge to find them in C. My favorite: [] is just '+'
I don't know why the author chose to change the syntactic structure of the loop though, since it hides the point.
You have to be careful when counting down though. If you're accessing an array, you might be tempted to do this:
for(size_t i = bar_len - 1; i >= 0; --i) {
foo(bar[i]);
}
It looks innocent enough, but size_t is unsigned, so i >= 0 will always be true. (Of course, using -Wall and -Wextra will warn you about this.)

Overall, the presentation is very weak, like something from yesterday's graduate.
I've got quite a bit of experience with C, and I haven't heard of the "static" array size feature before, which seems extremely useful.
So I am glad it was posted; it helped me, and this comment page was also something to both smile/laugh at and learn from. Thanks.
Screw this 'community'. It sucks ass.
Also, the design of this site is awful. And the engineering skills of the Mr. PG The Greatest apparently suck as well.