"16-bit Real Mode with insruction pointer pointing to address 0xffff.fff0, the reset vector. In this initial mode, the processor has first 12 address lines asserted, so any address looks like 0xfffx.xxxx. This fact combined with how addressing using segment selector (SS) register works, allows the CPU to access instruction at reset vector address 0xffff.fff0"
SS is the Stack Segment register. CS is for code.
In real mode, the instruction pointer is only 16 bits wide and cannot hold a value in excess of 0xffff.
Ignoring those issues, the explanation still doesn't match what I've seen before in the documentation. If things have changed, when did that happen? The old explanation:
The CS base is set to something like -16, that is, with all but the lower 4 bits set. This covers all of physical address space, with any higher bits just being ignored. The instruction pointer is set to 0. The result is execution that starts 16 bytes below the first address that is beyond the end of the physical address space. For example, with 44-bit physical addresses this would be at 0x00000ffffffffff0.
How the CPU addresses 0xffff.fff0 is not exactly specified in the post. Actually, the CS register is loaded with 0xf000, and normally this would yield a segment base address of 0x000f.0000 (CS left-shifted by 4 bits). But on a reset, as the post mentions, the first 12 address lines are asserted, so the base address ends up being 0xffff.0000. These address lines remain asserted until a long jump is made, after which they are de-asserted and the normal CS segment base calculation resumes.
If the instruction pointer contains -16 (0xfff0) as you mentioned, the resulting address is:
base address + IP = 0xffff.0000 + 0xfff0 = 0xffff.fff0
I am not sure if this is worth adding to the post, but it is definitely useful.
So at reset, CS is set to a descriptor whose numeric value is 0xf000 and whose base address is 0xffff0000, or something to that effect. All the rest follows naturally -- there's no special case logic that asserts lines of the address bus until the first long jump, it's simply that the reset value of the CS descriptor is rather magical, and that long jumps by their nature load a new CS segment descriptor which isn't magical.
Unless something has changed in recent hardware, there aren't 12 address lines just asserted. This is a side effect of the CS base being a particular value.
An important thing to realize is that x86 has hidden registers associated with segments. These registers get set when a segment selector register is loaded, not when it is used. The CS base is one of these hidden registers. If CS is loaded in protected mode, the base comes out of the descriptor table, and it remains when switching back to real mode (this is "unreal mode"). If CS is loaded in real mode, the base comes from the selector shifted left, and this base remains even if you switch to protected mode. Switching modes doesn't change a segment base. Loading segment registers is what changes a segment base.
So initially, the CS base is not set in a way that matches what you would get if you loaded the CS selector value that is seen. It is set to a value that is possibly 0xfffffff0, 0x00000ffffffffff0, 0x0000fffffffffff0, or 0xfffffffffffffff0. The older documentation I've seen would use the largest of those values. I suppose it could then be cut down to 32-bit by the bottleneck that is normally a part of addressing when not in long mode. This is the sort of area where Intel, AMD, and others may differ.
Perhaps there is a hardware debugger for x86 (like a JTAG debugger) that would show the initial CS base. One could also guess that Simics or VMware might be correct, disassembling them to find out what they use. Another idea is to examine the badly-documented state used by the virtualization instructions.
And yes, one has to be careful about outdated information.
* https://superuser.com/a/347115/38062
* https://superuser.com/a/695716/38062
AFAIK this is a completely new behaviour and only for newer versions of ME; the older versions still boot the main CPU like a 386 did, and the ME processor is a separate thing (Don't quote me on this; just information I gathered from brief research.)
This included how the system handed off control to the OS. The BIOS just loaded the first disk sector and executed whatever it found there. MBR-based partition tables were a DOS convention; the BIOS couldn't care less what the first disk sector did once it was in control.
When UEFI, the new boot interface, was invented, we needed a name for the old boot conventions. So we called them BIOS.
It's hard to claim the article confuses anything if the original word BIOS has such a confused meaning to begin with. If you say BIOS=PC firmware except the option ROMs, the article is correct.
And the name is even more confusing than you paint it to be. The "BIOS" was also the bottom-half of MS/PC/DR-DOS, contained in IO.SYS in (pre version 6) MS-DOS and in IBMBIO.COM in PC-DOS and (post version 3) DR-DOS.
Or 64-bit mode, on most current systems.
Having said that, it is perhaps worth clarifying in the article that classic BIOS would hand off in 16-bit mode :)
U-Boot is responsible for this on some chips.
But I guess this might not be the only fundamental difference.
(I have thought about using the SRAM for some optimization later at system runtime, because it should be incredibly fast. That should be possible, at least if you don't have to care about power consumption; or does the specification require turning it off?)
My understanding has always been that initializing DRAM consisted of two things:
The BIOS had to enumerate how much physical memory the motherboard had installed, and then test that that memory is working by writing a bit to each location and reading it back.
Would this be accurate?
It would also be worth noting that many BIOSes allow you to hit the space bar to skip memory initialization, presumably because it is somewhat time-consuming.