You shouldn't need to worry about this difference.
On Linux, the command you mentioned writes the current user's UID into the `.env` file that docker-compose reads, so the Airflow containers start under that user. This ensures that all the configuration files and folders generated by the application are owned by the logged-in user, since they're mounted from the user's current directory into the Docker container.
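For reference, the relevant steps from the Airflow docker-compose quick start look roughly like this (newer Airflow versions may add a `config/` directory, so check the docs for your version):

```bash
# Create the host directories that docker-compose mounts into the containers
mkdir -p ./dags ./logs ./plugins

# Record the current user's UID; docker-compose picks AIRFLOW_UID up
# from the .env file and runs the containers as that user
echo -e "AIRFLOW_UID=$(id -u)" > .env
```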
Note: everything below assumes that on Windows you'd be running Linux containers. If you're running Windows-native containers instead, you'll need to check whether there are Airflow instructions specific to that setup.
On Windows, the most common way to run Docker containers is inside a WSL "virtual machine", where a fraction of your computer's resources is used to boot a Linux distribution within Windows.
If your developers are already using WSL directly and have Docker installed inside the WSL instance, then exactly the same steps apply. As far as Airflow is concerned, it's running on a Linux host; it doesn't care that this host is itself running inside yet another Windows application. This is likely your situation if you or your developers follow a Linux-focused development workflow, even on Windows.
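To sanity-check that you're in this scenario, you can run something like the following from a WSL shell (a minimal sketch; the exact output will vary with your setup):

```bash
# From a WSL shell (e.g. Ubuntu): confirm Docker is reachable and note your UID
docker version   # prints client and server versions if the daemon is up
id -u            # the UID that the Linux instructions will write into .env

# From here, the Linux steps apply unchanged
echo -e "AIRFLOW_UID=$(id -u)" > .env
```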
If on the other hand you're mostly Windows-focused, your developers are likely using Docker Desktop or a similar solution. Most if not all of these bring up a Linux virtual machine (nowadays usually via WSL itself) and run Docker inside the Linux environment it provides. In this case, the `dags/`, `logs/` and `plugins/` directories live in a Windows folder, which is mapped into the WSL distribution and from there mounted into the container. When WSL translates the Windows-side files for Linux-side access, it maps their ownership to whichever user is logged in, and the problem you're concerned with disappears.
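You can observe this from the WSL side: Windows drives are exposed through the drvfs driver and show up with uniform ownership. A quick illustration (the path and username are hypothetical, substitute your own project folder; the output line is illustrative):

```bash
# Windows drives are mounted under /mnt inside WSL via the drvfs driver.
# Files on them appear owned by the WSL user, regardless of which
# Windows account created them:
ls -ld /mnt/c/Users/alice/airflow/dags
# drwxrwxrwx 1 alice alice 4096 Jan  1 12:00 /mnt/c/Users/alice/airflow/dags
```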