The short answer is that it does have an impact. It's a two-level mechanism: the first level (the task) optionally provides an outer boundary, and the second level (the container) provides inner boundaries within it. If you do not specify a task-level boundary, the whole host is considered the first-level boundary.
The long answer is included in this blog post; the relevant excerpts are quoted below.
Container resource configurations with a task size explicitly configured
In this particular scenario, the task itself becomes a solid boundary around the container(s) running inside it.
The containers running in this task configuration can only use the capacity defined by the task size; in effect, the task is their boundary. (Strictly speaking, containers can still see the host's total capacity because they can read /proc, but that capacity is not usable to them.)
From a memory management perspective, the important difference is that containers do not need to have any type of memory limit configured. In this case, they all compete for the amount of memory available at the task level. [...]
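For concreteness, here is a minimal sketch of that first case using boto3's register_task_definition. The family, region, container names, and images are purely illustrative; the point is that only the task-level memory is set, so the containers share the 512 MiB task size without limits of their own:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # illustrative region

# Hypothetical task definition: the task-level memory (512 MiB) is the
# outer boundary. Neither container sets "memory" or "memoryReservation",
# so both compete for the memory available at the task level.
response = ecs.register_task_definition(
    family="demo-task-size-only",        # hypothetical family name
    requiresCompatibilities=["EC2"],
    cpu="256",                           # task-level CPU units
    memory="512",                        # task-level memory in MiB (the boundary)
    containerDefinitions=[
        {
            "name": "web",               # hypothetical container
            "image": "nginx:alpine",
            "essential": True,
            # no container-level memory settings: bounded only by the task size
        },
        {
            "name": "sidecar",           # hypothetical container
            "image": "busybox",
            "essential": False,
        },
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```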
If you configure limits at the container level, the sum of the memory soft limits of all containers running inside this specific task cannot exceed the memory size of the task. [...]
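And a variant of the same sketch for the second case, where each container declares a soft limit via memoryReservation (again, names and images are illustrative). Per the rule above, the soft limits sum to 128 + 256 = 384 MiB, which stays within the 512 MiB task size; a configuration whose soft limits exceeded the task memory would not be valid:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # illustrative region

# Hypothetical task definition: each container sets a soft limit
# (memoryReservation). The sum of the soft limits (256 + 128 = 384 MiB)
# must not exceed the 512 MiB task-level memory.
response = ecs.register_task_definition(
    family="demo-task-size-with-soft-limits",  # hypothetical family name
    requiresCompatibilities=["EC2"],
    cpu="256",
    memory="512",                              # task-level memory in MiB
    containerDefinitions=[
        {
            "name": "api",                     # hypothetical container
            "image": "nginx:alpine",
            "essential": True,
            "memoryReservation": 256,          # soft limit in MiB
        },
        {
            "name": "worker",                  # hypothetical container
            "image": "busybox",
            "essential": False,
            "memoryReservation": 128,          # soft limit in MiB
        },
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```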