It would be nice if this were open sourced so more items could be added by the community (framework-specific checklists, too), but I like the concept.
One thing I would add, which is driving me crazy on mobile/tablet sign-up pages:
- make sure your email fields are annotated with type="email"
Another common issue is SSL mixed-content warnings, so I would also add: make sure to use protocol-relative / https-only URLs
(with a reminder NOT to use protocol-relative URLs in email templates; your Outlook users will appreciate it). Which should be part of the checklist: check your friggin' logs instead of assuming you never miss anything.
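Both suggestions above can be sketched in markup; the domain names are placeholders:

```html
<!-- Mobile browsers show an email-optimized keyboard for type="email" -->
<input type="email" name="email" autocomplete="email">

<!-- Protocol-relative URL: inherits http/https from the containing page
     (fine on the web, but avoid in email templates) -->
<script src="//example.com/js/app.js"></script>

<!-- Or simply force https to avoid mixed-content warnings -->
<img src="https://example.com/img/logo.png" alt="Logo">
```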
Just to share, since you mention framework-specific checklists: a similar concept has existed for a while for the PHP Symfony (version 1) framework (not official, but I quite liked it back then).
http://www.nngroup.com/articles/top-10-mistakes-web-design/
just got new styling the other day, as I work on updating my seventeen-year-old personal website.
There are still a LOT of websites that make several of those top ten mistakes. They are higher priority than many of the other issues mentioned on the checklist kindly submitted here. As other comments here have pointed out, it's desirable in a checklist to establish priorities.
Use your own site and make sure you don't hate it yourself.
When the browser makes a request for a static image and sends cookies together with the request, the server doesn't have any use for those cookies. So they only create network traffic for no good reason. You should make sure static components are requested with cookie-free requests. Create a subdomain and host all your static components there.
If your domain is www.example.org, you can host your static components on static.example.org. However, if you've already set cookies on the top-level domain example.org as opposed to www.example.org, then all the requests to static.example.org will include those cookies. In this case, you can buy a whole new domain, host your static components there, and keep this domain cookie-free.
http://developer.yahoo.com/performance/rules.html#cookie_fre...
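The cookie-scope issue above can be illustrated with response headers (domains and values are placeholders). Per the cookie rules, a `Domain` attribute covers all subdomains, while omitting it makes the cookie host-only:

```
# Cookie set with Domain=example.org: browsers send it to example.org
# AND every subdomain, including static.example.org
Set-Cookie: session=abc123; Domain=example.org; Path=/

# Cookie set with no Domain attribute from www.example.org: host-only,
# never sent along with requests to static.example.org
Set-Cookie: session=abc123; Path=/
```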
1) Custom 404 page
2) robots.txt
3) PICS label
4) viewport meta-tag
5) Google Rich Snippets
6) Fails the recommended CSS validator
"Remove 'www' subdomain" is just harmful. Force 'www.' instead. Why? Shitty URL parsers, marketing people, and DDoS attacks, that's why. Let's imagine you write a
- blog post
- blog comment
- press release (distributed via free and paid press release services)
- mail
- word
- forum post
- ...
If you have a non-www URL, it's a game of chance whether your in-text "whatever.tld" domain gets transformed into a clickable link. Yes, a lot of modern URL parsers will transform whatever.com into a clickable link, and some will even transform whatever.in into a usable link, but a lot of old, shitty, idiotic, strange URL parsers won't. And a big part of the web, I would say most of it, is not up to date. So using non-www will lead to a loss of inlinks and to a poor experience for users who want to reach your site but can't click on the in-text domain (they need to copy/paste instead). And the situation will get worse with the new commercial TLDs to come.
Yes, you can (in most cases) force a domain-to-link conversion in most CMSes if you write http:// in front of it. But in a promo text, most marketing/PR people will not write "and http://whatever.tld has a new feature to give people endless bliss"; they will write "whatever.tld has a new ....".
Oh, and by the way: whenever a journalist writes a piece about you, in print or online, they will always (or at least in a lot of cases) write www in front of your domain anyway. That's not an issue if you have redirects in place, just annoying if you have a non-www web property.
Plus, having a subdomain is another layer of defense against DDoS attacks. See this discussion on Hacker News from May 18, 2011 (my birthday, by the way): http://news.ycombinator.com/item?id=2575266
Go for www.
I consulted a sh-tload of companies on this question (and yes, I also think I have better things to do); any company that chooses non-www URLs regrets it down the road.
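If you do go for www, the redirect is a few lines of server config. A minimal Apache mod_rewrite sketch (the domain is a placeholder; nginx can do the same with a `return 301` in a separate `server` block):

```apache
# Permanently redirect bare-domain requests to the www host
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

The 301 status matters: it tells crawlers to consolidate link equity on the www host instead of splitting it across two.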
I'm sure just about anyone who has used the web for any length of time has hit the standard apache "Not found" page hundreds of times now and pretty much knows what it means.
Custom 404 pages are often quite confusing, as they try to be clever and redirect you to other content that may be interesting. Sometimes these aren't clear and give the impression that the link wasn't broken and that this is where the site designer intended you to go, which leaves you looking around the page for the content you thought you were going to get.
I agree, however I also believe that is the intent of filing it under "usability". It isn't usability as you would commonly define it, a good UX, but rather keeping the UX of the site consistent across all states, even failure, and giving the user an entry point back in to the rest of the site. A default Apache 404 does not do this, it's just a flat white page, with your only option being to go back from whence you came. If that wasn't your site, then the perception is you've lost a potential visitor, and that potentially could've been avoided with a custom 404 page.
It's less confusing and keeps people on site.
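For what it's worth, serving a branded 404 in Apache is a one-directive sketch (the path is an assumption). The important part, given the confusion mentioned above, is that the server still returns a real 404 status rather than redirecting to a 200 page:

```apache
# httpd.conf or .htaccess: show a custom page while keeping the 404 status.
# The leading slash makes Apache serve a local file, not issue a redirect.
ErrorDocument 404 /errors/not-found.html
```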
Is the author just ignorant, or am I a fool for thinking that, if anything, it should be "Security" that has the most elaborate items?
Lowest level 1a has 22 things to verify, highest level 4 has 121 things to verify. That's a lot of checkboxes.
Uh... I think that can be broken down into at least two different things...
This to me is like a checklist of things to automate. Is there any "build" system for the web?
http://site-analytics.org/ The intro is already outdated, I'll make a new one very soon.
Just have to create a new project with the template for each launch and then work your way down.
How about setting up automated backups?
Automated backups also aren't necessary for all sites, particularly if the entire site is in a source repository somewhere and doesn't have users.
You need to back up production sites. A repo could do that, but it's just another backup system that needs to be implemented and verified.
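A minimal sketch of what "automated backups" can mean as a cron entry (paths, the database name, and the 14-day retention are assumptions; `pg_dump` assumes PostgreSQL, and the `%` signs must be escaped in crontab syntax):

```
# /etc/cron.d/site-backup — nightly dump at 03:00, prune after 14 days
0 3 * * *  root  pg_dump mydb | gzip > /var/backups/mydb-$(date +\%F).sql.gz
30 3 * * * root  find /var/backups -name 'mydb-*.sql.gz' -mtime +14 -delete
```

The "verified" part still needs a human (or a second job) actually restoring a dump now and then; an unrestorable backup is just disk usage.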