There's not really a "relationship" between these two parameters, due to what is known as modulus switching.
Roughly, given an LWE instance $\bmod q$, one can change it into an LWE instance $\bmod p$ at relatively little cost, for a wide variety of $p$.
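To make this concrete, here is a minimal numpy sketch of the basic scale-and-round map for plain LWE (the ring/module case applies the same map coefficient-wise). The parameter values are toy choices of mine for illustration, the secret is taken to be small (as after the standard normal-form transformation), and the rounding is deterministic, whereas the formal reductions use more careful randomized rounding:

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, p = 64, 3329, 769              # toy dimension and moduli (illustrative only)

s = rng.integers(-1, 2, n)           # small (ternary) secret; the full reductions
                                     # handle general secrets with extra work
a = rng.integers(0, q, n)            # uniform public vector
e = int(rng.integers(-2, 3))         # small noise
b = int((a @ s + e) % q)             # LWE sample (a, b) mod q

# Modulus switch: scale each component by p/q and round.
a_p = np.rint(a * p / q).astype(np.int64) % p
b_p = int(np.rint(b * p / q)) % p

# (a_p, b_p) is still an LWE sample for the same secret, now mod p; the new
# noise is roughly (p/q)*e plus a rounding term bounded by ||s||_1 / 2.
err = (b_p - int(a_p @ s)) % p
err = err - p if err > p // 2 else err
print("noise after switching:", err)  # small relative to p
```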
There are many results along these lines, but I will describe one from *Worst-case to Average-case Reductions for Module Lattices* by Langlois and Stehlé.
Let $\psi$ be some probability distribution on $\mathbb{T}_{R^\vee}$, and let $s\in(R^\vee_q)^d$ be a vector. We define $A^{(M)}_{q,s,\psi}$ as the distribution on $(R_q)^d \times \mathbb{T}_{R^\vee}$ obtained by choosing a vector $a\in(R_q)^d$ uniformly at random and $e \in \mathbb{T}_{R^\vee}$ according to $\psi$, and returning $(a, \frac{1}{q}\langle a, s\rangle + e)$.
MLWE: Let $q \geq 2$ be an integer and $\Psi$ a distribution over a family of distributions over $K_\mathbb{R}$. The decision version of the Module Learning With Errors problem $M\text{-}LWE_{q, \Psi}$ is as follows: let $s \in (R^\vee_q)^d$ be uniformly random and $\psi$ be sampled from $\Psi$; the goal is to distinguish between arbitrarily many independent samples from $A^{(M)}_{q, s, \psi}$ and the same number of independent samples from $U((R_q)^d \times \mathbb{T}_{R^\vee})$.
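For intuition, here is a hypothetical discretized toy version of the two distributions in this game, over $R_q = \mathbb{Z}_q[x]/(x^n+1)$. It simplifies the paper's setup in several ways: it scales the torus $\mathbb{T}_{R^\vee}$ up by $q$, ignores the dual ideal $R^\vee$, and uses a spherical Gaussian in place of the elliptical family $\Psi$; the function names and parameter values are my own:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, q, sigma = 8, 3, 3329, 1.0     # ring degree, module rank, modulus, noise width

def ring_mul(f, g):
    """Multiply in R_q = Z_q[x]/(x^n + 1) (negacyclic convolution)."""
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                res[i + j] += f[i] * g[j]
            else:
                res[i + j - n] -= f[i] * g[j]
    return res % q

s = rng.integers(0, q, (d, n))       # secret s in (R_q)^d

def mlwe_sample():
    """One sample from a discretized analogue of A^{(M)}_{q,s,psi}."""
    a = rng.integers(0, q, (d, n))   # uniform a in (R_q)^d
    e = np.rint(rng.normal(0, sigma, n)).astype(np.int64)  # rounded Gaussian noise
    b = sum(ring_mul(a[i], s[i]) for i in range(d))        # <a, s> in R_q
    return a, (b + e) % q

def uniform_sample():
    """One sample from U((R_q)^d x R_q), the 'random' case of decision M-LWE."""
    return rng.integers(0, q, (d, n)), rng.integers(0, q, n)
```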
This is more general than RLWE, and specializes to RLWE when $d = 1$.
The family of distributions $\Psi_\alpha$ is a certain family of elliptical Gaussian distributions; see Section 2.3 of the paper.
Anyway, the modulus switching result is Theorem 4.8.
Here, $N = nd$ is the "total dimension" of the MLWE instance.
Setting $d = 1$ recovers the case of RLWE, which is the case you are interested in.
Let $p, q \in [2, 2^{N^{O(1)}}]$ and $\alpha, \beta \in (0, 1)$ be such that $\beta \geq \alpha \max(1, \frac{q}{p})n^{1/4}N^{1/2}\omega(\log_2 N)$ and $\alpha q \geq \omega(\sqrt{\log(N)/n})$. Then there exists a polynomial-time reduction from $M\text{-}LWE_{q,\Psi_\alpha}$ to $M\text{-}LWE_{p,\Psi_\beta}$.
This is all to say that you can reduce from an arbitrary modulus $q$ to another arbitrary modulus $p$, at the cost of inflating the noise rate from $\alpha$ to $\max(1, \frac{q}{p})\,\alpha\, n^{1/4}\sqrt{N}\,\omega(\log_2 N)$. This isn't totally free (there is an additional $n^{1/4}\sqrt{N}$ factor on top of the $\frac{q}{p}$ scaling), but given that the moduli $q, p$ are typically small polynomials in $N$, the cost you pay is small in comparison to the parameter sizes.
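As a quick sanity check on parameters, here is a hypothetical helper evaluating the noise-rate bound from the theorem; note that instantiating the asymptotic $\omega(\log_2 N)$ term as a concrete multiple of $\log_2 N$ is a heuristic choice on my part:

```python
import math

def switched_noise_rate(alpha, q, p, n, d, c=1.0):
    """Noise rate beta sufficient after switching mod q -> mod p, per the bound
    beta >= alpha * max(1, q/p) * n^(1/4) * sqrt(N) * omega(log2 N),
    with omega(log2 N) heuristically instantiated as c * log2(N)."""
    N = n * d
    return alpha * max(1, q / p) * n ** 0.25 * math.sqrt(N) * c * math.log2(N)

# Example: an RLWE instance (d = 1) of degree n = 1024, switching down q -> p.
print(switched_noise_rate(alpha=2**-25, q=2**30, p=2**20, n=1024, d=1))
```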
As a result, there really isn't a relationship between the ciphertext modulus alone (as it is typically called, rather than the ciphertext quotient) and the dimension; any such relationship also needs to take the size of the error distribution into account.
As for how to actually set all of these parameters in practice, people typically feed them into the LWE Estimator (https://github.com/malb/lattice-estimator), which produces a bit-security estimate for each particular parameter set.
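A minimal query might look something like the following sketch (run under SageMath). The API shown reflects my understanding of the estimator at the time of writing and may drift, and the parameter values are placeholders rather than a recommendation:

```python
# Sketch of querying the lattice-estimator; run under SageMath.
from estimator import *

params = LWE.Parameters(
    n=1024,                        # LWE dimension (N = nd for an MLWE instance)
    q=2**26,                       # ciphertext modulus
    Xs=ND.UniformMod(3),           # ternary secret distribution
    Xe=ND.DiscreteGaussian(3.19),  # discrete Gaussian error, stddev 3.19
)
LWE.estimate.rough(params)         # quick estimate over the main attack families
```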