I'm sorry, but Timsort is a bit of a hack. It's a "this seems to work well" algorithm, and it shows. It took 13 years until its claimed running time was finally proven in 2015. The four (originally three) rules for merging sequences from the stack are rather arbitrary. Multiple issues were found well after it was already widely deployed.
Recently, it was also shown that Timsort doesn't make optimal use of the information it has about runs. As an alternative, powersort was proposed, which seems to outperform Timsort both on randomly ordered inputs and on inputs with long runs: https://arxiv.org/pdf/1805.04154.pdf
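For the curious, the "rather arbitrary" merge rules are an invariant on Timsort's stack of pending run lengths: for the three topmost runs X (top), Y, Z, it wants Z > Y + X and Y > X, so run lengths grow at least as fast as Fibonacci numbers and the stack stays logarithmically small. A minimal sketch of the check (class and method names are mine, not any real library's; actual implementations differ in detail):

```java
// Hypothetical sketch of Timsort's run-stack invariant, not real JDK/CPython code.
// The 2015 analysis found that checking this only at the top of the stack
// after a merge is not enough to preserve it everywhere, hence the fourth rule.
public class TimsortInvariant {
    // n is the number of runs currently on the stack; runLen[n-1] is the top.
    static boolean invariantHolds(int[] runLen, int n) {
        // Rule Y > X: the second run must be strictly longer than the top run.
        if (n >= 2 && runLen[n - 2] <= runLen[n - 1]) return false;
        // Rule Z > Y + X: the third run must exceed the two above it combined.
        if (n >= 3 && runLen[n - 3] <= runLen[n - 2] + runLen[n - 1]) return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(invariantHolds(new int[] {10, 5, 3}, 3)); // true: 5 > 3 and 10 > 5 + 3
        System.out.println(invariantHolds(new int[] {5, 3, 2}, 3));  // false: 5 <= 3 + 2
    }
}
```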
Totally! And there are much better algorithms for the internet routing protocols, very well documented and tested and all, still just sitting on desks under paperweights...
People were still finding bugs in common implementations of Timsort as of 3 years ago. It's not unreasonable to stick with a somewhat more conservative choice for a core library function until there's more reason to have confidence in the implementations of Timsort.
Didn't one of the simplest algorithms, binary search, suffer from a bug in a standard library (was it Java?) a few years ago? IIRC it was a corner case; I should check because I don't recall the details, but the code looked robust.
If I recall correctly, the bug only triggered for arrays/lists longer than about 2^30 elements — roughly a billion — so for years nobody hit it in the wild, simply because arrays that large were rare.
Edit: The article linked in the other comment says the Java dev team didn't even bother to implement the "proper" fix, but merely adjusted how much space is allocated.
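For anyone curious, the binary search bug mentioned above is the midpoint overflow Joshua Bloch wrote about in 2006: `(low + high) / 2` overflows `int` once `low + high` exceeds 2^31 - 1. A minimal sketch of the bug and the shipped fix (class and method names are mine, not the JDK's):

```java
// Sketch of the java.util.Arrays.binarySearch midpoint-overflow bug.
public class MidpointOverflow {
    static int brokenMid(int low, int high) {
        return (low + high) / 2;   // sum wraps negative once low + high > Integer.MAX_VALUE
    }

    static int fixedMid(int low, int high) {
        // Unsigned shift reinterprets the wrapped sum as a 32-bit unsigned
        // value, so the midpoint comes out right for any valid indices.
        return (low + high) >>> 1;
    }

    public static void main(String[] args) {
        int low = 2, high = Integer.MAX_VALUE - 1;    // valid array indices
        System.out.println(brokenMid(low, high));     // negative: a garbage index
        System.out.println(fixedMid(low, high));      // 1073741824, the correct midpoint
    }
}
```

`low + (high - low) / 2` is the other common fix, and the one that also works in languages without an unsigned shift.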
Being pretty certain isn't the same as being certain. I really don't care what most libraries do as long as they document what exactly they have chosen to do. Go's `sort.Sort(data Interface)` definitely does not shuffle.