This gets into combinatorial search stuff that is hard to explain concisely, but here's a programming analogy:
Ask two programmers to develop the same huge project. Tell one to do it in JavaScript, and the other to do it in C# on Windows. You will get two radically different architectures, because each language biases how the programmer traverses the combinatorial search space of possible designs. The JS programmer might give you a bunch of Dockerized microservices that run against a NoSQL database, while the C# programmer might give you an OOP-based monolith built around an ORM. Those two languages "want" to become those things; it's built into their structure.
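You can see the same effect in a toy greedy search. This is a cartoon, not a model of biology or of programming languages: two searchers climb the same made-up fitness landscape (the function and move sets are invented for illustration), and their "bias" is just which moves they're allowed to take. Same landscape, same starting point, different attractors.

```python
def fitness(x):
    # A bumpy 1-D landscape with more than one local maximum.
    return -(x - 3) ** 2 * (x - 10) ** 2 + x

def hill_climb(start, moves, max_steps=100):
    """Greedy search: take the best improving move; stop at a local max."""
    x = start
    for _ in range(max_steps):
        best = max(moves, key=lambda m: fitness(x + m))
        if fitness(x + best) <= fitness(x):
            break  # no move improves things: we're stuck on a peak
        x += best
    return x

# Two "languages": one biased toward small steps, one toward big ones.
small_stepper = hill_climb(0, moves=[-1, 1])   # settles on the peak at x = 3
big_stepper = hill_climb(0, moves=[-5, 5])     # jumps past it to x = 10
```

Neither searcher is "wrong"; each ends up where its move set makes it easy to go. The small-stepper parks on the nearest peak and the big-stepper overshoots it onto a better one, purely because of how each is allowed to move.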
So a different biochemistry might, for example, be biased toward more or less biodiversity, or different levels of evolutionary variation, or a different trade-off between stability and adaptability, or different levels of radiation tolerance, or longer or shorter average life spans due to more or less chemical stability, etc. That in turn might yield radically different biospheres with radically different distributions of species. If one gets complex enough, it might yield a radically different form of intelligence, like a hive mind or a distributed system, or maybe something we can't even imagine. Or... some initial starting positions might never yield intelligence at all. For all we know, most initial setups have a low probability of ratcheting up to this level of complexity.
We only have one data point, so we have no idea. It's like using one example of a hurricane to generalize about the behavior of all possible cyclonic storms in all possible atmospheres in the entire universe.