As others have mentioned, for entropy to be well-defined, you need to have some underlying probability distribution $D$.
If you are willing to dig around some unsavory places, you could look for a moderately large data breach to get an empirical distribution of passwords whose entropy you can compute.
Alternatively, there might be user studies that have computed things like this.
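If you do go the empirical route, the bookkeeping is simple. Here is a minimal Python sketch of turning a list of observed passwords into an empirical distribution; the `sample` list is obviously a toy stand-in for a real corpus:

```python
from collections import Counter

def empirical_distribution(passwords):
    """Map each observed password to its empirical probability."""
    counts = Counter(passwords)
    total = len(passwords)
    return {pw: c / total for pw, c in counts.items()}

# Hypothetical toy sample; a real breach corpus would have millions of entries.
sample = ["123456", "password", "123456", "qwerty", "123456", "hunter2"]
dist = empirical_distribution(sample)
# -> {'123456': 0.5, 'password': 0.1666..., 'qwerty': 0.1666..., 'hunter2': 0.1666...}
```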
The main point of this answer is to mention that there are multiple inequivalent notions of entropy, and that the traditional (Shannon) entropy is not always the best one in cryptography.
Shannon entropy is defined as
$$H(X) = \sum_{x\in \mathsf{supp}(X)} p(x)\log(1/p(x))$$
Another fairly-common notion of entropy is the min-entropy, defined as
$$H_\infty(X) = \min_{x\in \mathsf{supp}(X)} \log(1/p(x)) = \log\Big(1/\max_{x\in \mathsf{supp}(X)} p(x)\Big)$$
This is determined entirely by the probability of the most likely output of $X$.
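Both quantities are easy to compute from an empirical distribution like the `dist` dictionary sketched above; a minimal Python sketch, using base-2 logs so the answers are in bits:

```python
import math

def shannon_entropy(dist):
    """H(X) = sum over the support of p(x) * log2(1/p(x))."""
    return sum(p * math.log2(1 / p) for p in dist.values() if p > 0)

def min_entropy(dist):
    """H_inf(X) = log2(1/p_max), where p_max is the probability of the most likely outcome."""
    return math.log2(1 / max(dist.values()))

print(shannon_entropy(dist), min_entropy(dist))
# On the toy sample above: roughly 1.79 bits of Shannon entropy, exactly 1 bit of min-entropy.
```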
Min-entropy is often the more useful measure for cryptographic purposes, which can be demonstrated via a simple example.
Let $X$ be a random variable that is
- $0$ with probability $1/2$, and
- uniform over $\{1,\dots,2^k\}$ with probability $1/2$.
The min-entropy of this is very small (it is exactly $1$).
The Shannon entropy of it is much larger: it works out to $k/2 + 1$.
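The calculation is short: the most likely outcome is $0$ with probability $1/2$, and each of the $2^k$ other outcomes has probability $2^{-(k+1)}$, so
$$H_\infty(X) = \log\frac{1}{1/2} = 1, \qquad H(X) = \frac{1}{2}\log 2 + 2^k\cdot\frac{1}{2^{k+1}}\log 2^{k+1} = \frac{1}{2} + \frac{k+1}{2} = \frac{k}{2}+1,$$
taking logs base $2$.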
So if you are measuring the quality of a distribution over passwords via
- min-entropy, you will think that $X$ is quite bad, vs.
- Shannon entropy, you will think $X$ is quite good.
Of course, half of all users sampling passwords from $X$ can be trivially attacked, so it should perhaps be considered "bad" (in accordance with what min-entropy would predict).