All you need to do, in broad strokes, is:
* Set up a VPC. Defaults work.
* Create an EC2 instance. Make sure it has a dedicated IAM role with a policy like this [1], so that Kubernetes can do things like create ELBs.
* Install Kubernetes from binary packages. I've been using Kismatic's Debian/Ubuntu packages [2], which are nice.
* Install Docker (apparently it must be >= 1.9 and < 1.10).
* Install etcd.
* Make sure your AWS instance has a sane MTU ("sudo ifconfig eth0 mtu 1500"). AWS uses jumbo frames by default [3], which I found do not work with Docker Hub (even though it's also on AWS).
* Edit /etc/default/docker to disable its iptables magic and use the Kubernetes bridge, which Kubelet will eventually create for you on startup:
DOCKER_OPTS="--iptables=false --ip-masq=false --bridge=cbr0"
* Decide which CIDR ranges to use for pods and services. You can carve a /24 from your VPC subnet for each. They have to be non-overlapping ranges.
* Edit the /etc/default/kube* configs to set DAEMON_ARGS in each. Read the help page for each daemon to see what flags they take. Most have sane defaults or are ignorable, but you'll need some specific ones [4].
* Start etcd, Docker and all the Kubernetes daemons.
* Verify it's working with something like: kubectl run test --image=dockercloud/hello-world
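For a rough idea of what the DAEMON_ARGS settings end up looking like, here's a sketch. This is not the exact content of [4] — flag names vary between Kubernetes versions, and the addresses and CIDRs are placeholders I picked for illustration:

```shell
# /etc/default/kube-apiserver (sketch; values are examples, not recommendations)
DAEMON_ARGS="--etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.0.2.0/24 \
  --insecure-bind-address=127.0.0.1"

# /etc/default/kubelet (sketch; --configure-cbr0 is what creates the
# cbr0 bridge mentioned above)
DAEMON_ARGS="--api-servers=http://127.0.0.1:8080 \
  --configure-cbr0=true --pod-cidr=10.0.1.0/24"

# /etc/default/kube-proxy, kube-controller-manager, kube-scheduler (sketch)
DAEMON_ARGS="--master=http://127.0.0.1:8080"
```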
Unless I'm forgetting something, that's basically it for one master node. For multiple nodes, you'll have to run Kubelet on each. You can run as many masters (kube-apiserver) as you want, and they'll use etcd leases to ensure that only one is active.
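Sanity-checking the pod/service CIDR step above is easy to script. A minimal sketch, assuming both ranges are /24s (the variable names and example ranges are mine, not part of any Kubernetes tooling):

```shell
#!/bin/sh
# Example pod and service ranges, both /24s carved from the VPC subnet.
POD_CIDR="10.0.1.0/24"
SERVICE_CIDR="10.0.2.0/24"

# For two /24s, "non-overlapping" just means the first three octets differ.
# ${VAR%.*} strips the last dot and everything after it (".0/24" here),
# leaving the /24 network prefix.
pod_net="${POD_CIDR%.*}"
svc_net="${SERVICE_CIDR%.*}"

if [ "$pod_net" = "$svc_net" ]; then
    echo "overlap: pod and service CIDRs share network $pod_net"
else
    echo "ok: $POD_CIDR and $SERVICE_CIDR do not overlap"
fi
```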
[1] https://gist.github.com/atombender/3f9ba857590ea98d18163e983...
[2] http://repos.kismatic.com/debian/
[3] http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_m...
[4] https://gist.github.com/atombender/e72c2acc2d30b0965543273a2...