

For shared media like cable and wireless it's often asymmetrical so that everyone gets better speeds, not so the provider can hold you back.
Take wireless ISPs for instance: let's say you have 20 customers on a single access point (AP). Like a walkie-talkie, a radio can't transmit and receive at the same time, and no two customers can be transmitting at the same time either.
So to get around this problem, TDMA (time-division multiple access) is used. Basically, time is split into slices and each user is given a certain percentage of those slices.
Since the AP is the one transmitting to everyone, it usually gets the bulk of the slices, say 60% or more. That airtime is the shared download capacity for everyone on the network.
Most users don't really upload much, so giving the user radios slices equal to the AP's would be a massive waste of airtime. And since there are 20 customers on this theoretical AP, every 1mbit cut from each user's upload speed is 20mbit added to the total download capacity available to anyone downloading on that AP.
So let's say we have an AP and clients capable of 1000mbit. With 20 users and 1 AP, if we wanted symmetrical speeds we'd need 40 equal slots: 20 slots on the AP (one for each user's download) plus 1 slot for each user to upload back. Every user gets 25mbit download and 25mbit upload.
Contrast that with asymmetrical. Let's say we do an 80/20 AP/client airtime split. We end up with 800mbit of download shared among everyone and 10mbit of upload per user.
In the worst case, every user is downloading at the same time and you get about 40mbit of that 800; still quite an improvement over 25mbit. And if some of those people aren't home or aren't active at the time, that's that much more for those who are.
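If it helps, here's that math as a quick back-of-the-envelope in Python (the 1000mbit link, 20 users, and the 80/20 split are just the numbers from the example above, not anything measured):

```python
# Back-of-the-envelope TDMA airtime math from the example above.
# Assumptions: 1000 mbit of usable airtime, 1 AP, 20 client radios.
LINK_RATE_MBIT = 1000
NUM_USERS = 20

# Symmetrical: 40 equal slots -> 20 download slots on the AP + 1 upload slot per user.
slot = LINK_RATE_MBIT / (2 * NUM_USERS)        # 25 mbit per slot
sym_down_per_user = slot                       # 25 mbit down
sym_up_per_user = slot                         # 25 mbit up

# Asymmetrical: 80% of airtime to the AP (download), 20% split across the uploads.
ap_share = 0.80
shared_down = LINK_RATE_MBIT * ap_share                          # 800 mbit shared download
asym_up_per_user = LINK_RATE_MBIT * (1 - ap_share) / NUM_USERS   # 10 mbit up each

# Worst case: all 20 users downloading at once.
worst_case_down = shared_down / NUM_USERS      # 40 mbit each, vs 25 symmetrical

print(f"symmetrical:  {sym_down_per_user:.0f} down / {sym_up_per_user:.0f} up per user")
print(f"asymmetrical: up to {shared_down:.0f} shared down, {asym_up_per_user:.0f} up per user, "
      f"{worst_case_down:.0f} down each if all 20 are busy")
```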
I think the slice sizes are a bit more dynamic on modern systems: the AP adjusts the user radios' slices on the fly so that idle clients don't waste a bunch of dead air, though they still need a little time allocated for when data does start to flow.
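I don't know exactly what scheduler any given vendor runs, but the rough idea is something like this demand-weighted split with a small floor for idle clients (purely a toy sketch, not any real product's algorithm):

```python
def allocate_slices(demand_mbit, total_slices=100, min_slices=1):
    """Toy demand-weighted TDMA scheduler: idle clients keep a tiny
    reserved slice so new traffic can start flowing, and active clients
    split the rest in proportion to their queued demand."""
    # Give every client the minimum floor first.
    slices = {client: min_slices for client in demand_mbit}
    remaining = total_slices - min_slices * len(demand_mbit)
    total_demand = sum(demand_mbit.values())
    if total_demand > 0:
        for client, demand in demand_mbit.items():
            slices[client] += round(remaining * demand / total_demand)
    return slices

# Two busy clients, two idle ones: the busy pair get nearly all the airtime,
# the idle pair keep one slice each so they aren't starved when traffic resumes.
print(allocate_slices({"a": 50, "b": 30, "c": 0, "d": 0}))
```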
A quick Google suggests DOCSIS cable modems use TDMA too, so this all likely applies to cable users as well.
Named volumes are often the default because there is no chance of them conflicting with other services or containers running on the system.
Say you deployed two different docker compose apps each with their own MariaDB. With named volumes there is zero chance of those conflicting (at least from the filesystem perspective).
This also facilitates easier cleanup. The app's documentation can just say “docker compose down -v” and be done, instead of listing a bunch of directories that need to be cleaned up.
Those lingering directories can also cause problems for users who wanted a clean start after their app broke: with a bind mount, that broken database schema won't have been deleted for them when they bring the services back up.
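For a concrete picture (the service and volume names here are just made up for the example), an app author might ship a compose file like this; `docker compose down -v` then removes the `db_data` volume along with the containers:

```yaml
# docker-compose.yml as an app author might ship it (illustrative example)
services:
  db:
    image: mariadb:11
    environment:
      MARIADB_ROOT_PASSWORD: changeme
    volumes:
      - db_data:/var/lib/mysql   # named volume, managed by Docker

volumes:
  db_data:   # ends up under /var/lib/docker/volumes/ as <project>_db_data,
             # so two compose projects with the same volume name won't collide
```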
All that said, I very much agree that when you go to deploy a docker service you should consider changing the named volumes to standard bind mounts (example below), for a few reasons.
When running production applications I don't want the volumes to be so easy to wipe out. A little extra protection from accidental deletion is handy.
The default location for named volumes (under /var/lib/docker) doesn't work well with any advanced partitioning strategy, e.g. if you want your database volume on a different partition than your static web content.
An older reason, and maybe more personal preference at this point: back before Docker's overlay2 storage driver had matured we used the btrfs driver instead, and occasionally Docker would break and we'd need to wipe out the entire /var/lib/docker btrfs filesystem. So I just personally want to keep anything persistent out of that directory.
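The production edit is usually just swapping the named volume for an explicit host path on whatever partition you like (the /srv/myapp path below is hypothetical, pick whatever fits your own layout):

```yaml
# Same service, edited for deployment: bind mount on a dedicated partition.
# /srv/myapp is a made-up path; use whatever matches your disk layout.
services:
  db:
    image: mariadb:11
    environment:
      MARIADB_ROOT_PASSWORD: changeme
    volumes:
      - /srv/myapp/mariadb:/var/lib/mysql   # a bind mount isn't removed by `docker compose down -v`
```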
So basically application writers should use named volumes to simplify the documentation/installation/maintenance/cleanup of their applications.
Systems administrators running those applications should know and understand the compose file well enough to change those settings and make them production-ready for their environment. Reading through it and making those changes ends up being part of learning how the containers are structured in the first place.