That the conclusion seems preposterous doesn’t necessarily mean it’s wrong.
Can it be proven wrong? Among other factors, that depends on the definition of “mind uploading”. And that definition isn’t very clear-cut, since procedures like “sideloading” can lead to roughly equivalent results, as described earlier in this thread.
Let me introduce another concept, which is indeed a bit more abstract: Capability convergence. This means that either humans and machines will have essentially the same capabilities (“weak convergence”), or that humans and machines will converge and actually become one and the same (“strong convergence”). With those terms, the interesting question is: What is needed for the different types of convergence?
It might be that a sufficiently advanced brain–machine interface suffices for weak convergence. In the case that anthropotent AIs actually require interfacing with human brains in order to unlock all their capabilities, it could also suffice for strong convergence. I don’t see the latter scenario as very likely, at least not in the very long run. There are probably configurations of matter that can do everything a human brain can do (including, of course, generating a human consciousness) while using less energy, operating faster, being more portable, and so on.
What actually matters, however, is what is required for weak convergence, since weak convergence is enough to keep humans on the same level as machines. I think the crucial requirement is finding a way to get relevant information out of the human brain. At the moment our “information output speed” lies at a few bytes per second (for example by typing on a keyboard). Getting that number up to gigabytes per second, or more, is a very difficult and serious challenge.
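As a rough back-of-the-envelope illustration of how large that gap is (all figures here are my own assumptions, not measurements):

```python
# Illustrative bandwidth comparison: typing output vs. a hypothetical
# gigabyte-per-second brain interface. All numbers are assumed, not measured.
typing_wpm = 60          # assumed typing speed, words per minute
bytes_per_word = 5       # ~5 characters per English word

typing_bps = typing_wpm * bytes_per_word / 60   # bytes per second while typing

target_bps = 1e9         # one gigabyte per second, the hypothetical target

gap = target_bps / typing_bps
print(f"Typing: {typing_bps:.1f} B/s, target: {target_bps:.0e} B/s, "
      f"gap: {gap:.0e}x")
```

Under these assumptions, typing delivers about 5 bytes per second, so reaching a gigabyte per second would mean improving our output bandwidth by roughly eight orders of magnitude.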