Reading a file sequentially is faster than random access, and keeping in memory only the information needed for a calculation is more efficient than loading the whole file. Some tasks are therefore much easier to perform with a particular BAM sort order, because the BAM can then be read sequentially while only a small part of the data needs to be kept in memory.
For example, marking duplicates (which, for paired reads, is done by comparing the 5' mapping positions of both reads) is a lot easier on position-sorted BAMs, because reads that are physically close inside the file are guaranteed to be close on the genome. Without that guarantee, one would need to scan the whole file and build a hash of reads per position before any duplicate could be marked.
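The benefit of position sorting can be sketched in a few lines. The snippet below is a simplified illustration, not a real duplicate-marking implementation: `Read` is a hypothetical minimal record (name, chromosome, 5' position of the read and of its mate), and the input is assumed to be coordinate-sorted, so the buffer of seen signatures can be cleared every time the position advances.

```python
from collections import namedtuple

# Hypothetical minimal read record, not a real BAM record:
# pos is the read's 5' mapping position, mate_pos the mate's.
Read = namedtuple("Read", "name chrom pos mate_pos")

def mark_duplicates(reads):
    """Stream over coordinate-sorted reads; any read repeating an already-seen
    (chrom, pos, mate_pos) signature is flagged as a duplicate. Because the
    input is position-sorted, only signatures at the current position need to
    be buffered, never the whole file."""
    seen = set()
    current = None
    dup_names = set()
    for r in reads:
        if (r.chrom, r.pos) != current:
            seen.clear()            # new position: earlier signatures can be dropped
            current = (r.chrom, r.pos)
        sig = (r.chrom, r.pos, r.mate_pos)
        if sig in seen:
            dup_names.add(r.name)
        else:
            seen.add(sig)
    return dup_names

reads = [
    Read("a", "chr1", 100, 300),
    Read("b", "chr1", 100, 300),  # same 5' positions as "a": duplicate
    Read("c", "chr1", 100, 450),  # same pos, different mate: not a duplicate
    Read("d", "chr1", 250, 500),
]
print(mark_duplicates(reads))  # {'b'}
```

On an unsorted file the `seen.clear()` would be unsafe, and the set would have to grow to hold every position in the file, which is exactly the whole-file hash described above.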
Conversely, counting reads mapping to features is easier on name-sorted files, since the two reads of a pair are next to each other and secondary/supplementary alignments sit next to their primary alignment. Again, without that ordering, one would need to scan the whole file to build a hash of read names per mapped feature.
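A minimal sketch of that streaming pattern, under assumed simplifications: each record is a hypothetical `(read_name, feature, is_primary)` tuple rather than a real alignment, and feature assignment has already been done. Name sorting makes all alignments of a fragment consecutive, so `itertools.groupby` can resolve each fragment with a tiny buffer.

```python
from itertools import groupby
from collections import Counter

# Hypothetical records from a name-sorted stream:
# (read name, overlapped feature or None, primary alignment?)
records = [
    ("r1", "geneA", True), ("r1", "geneA", True),   # proper pair, both mates on geneA
    ("r2", "geneA", True), ("r2", "geneB", True),   # mates on different genes: ambiguous
    ("r3", "geneB", True), ("r3", None, False),     # secondary alignment next to its primary
]

def count_fragments(records):
    """Count each fragment once per feature. Because the stream is name-sorted,
    all alignments of a pair arrive consecutively and can be grouped on the fly."""
    counts = Counter()
    for name, group in groupby(records, key=lambda rec: rec[0]):
        # keep only primary alignments and collect the features they hit
        feats = {feat for _, feat, primary in group if primary and feat is not None}
        if len(feats) == 1:          # unambiguous: count the fragment once
            counts[feats.pop()] += 1
        # pairs hitting different features (or none) are skipped as ambiguous
    return counts

print(count_fragments(records))
```

Here r1 and r3 each contribute one count while r2 is discarded as ambiguous; on a position-sorted file the two mates of r2 could be megabases apart, forcing the whole-file hash of read names mentioned above.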
Of course, the most immediate concern for an end-user is to check which settings are required and which sort order is expected by the tool of choice.