I have seen that one can sometimes use $B=HNF(G)$, but it might not always work to get a bad basis, and therefore it is not explicit (nor very clear to me...).
Let $\mu : \mathbb{R}^{n\times n}\to \mathbb{R}_{\geq 0}$ be a measure of the quality of a basis.
For example, one could take $\mu(B) = \lVert LLL(B)_1\rVert$ to be the norm of the first vector of the LLL reduction of $B$ (as you suggest), but many other quality measures are possible.
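As a concrete (and somewhat arbitrary) instantiation, here is a minimal sketch of this particular $\mu$, assuming fpylll is available; the dimension and bit-size below are placeholder choices, and any other reduction library would do just as well.

```python
from fpylll import IntegerMatrix, LLL

def mu_first_vector_norm(basis_rows):
    """mu(B) = Euclidean norm of the first vector of an LLL reduction of B
    (the rows of `basis_rows` are the basis vectors)."""
    A = IntegerMatrix.from_matrix(basis_rows)  # work on a copy of the input basis
    LLL.reduction(A)                           # reduce in place with default parameters
    first = [A[0, j] for j in range(A.ncols)]
    return sum(x * x for x in first) ** 0.5

# Example: a basis with uniform 30-bit entries in dimension 40 (arbitrary choices).
B = IntegerMatrix.random(40, "uniform", bits=30)
print(mu_first_vector_norm([[B[i, j] for j in range(B.ncols)] for i in range(B.nrows)]))
```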
The point of using the HNF is not that it leads to a basis that is "the worst" in some abstract sense (for most quality measures, multiplying by a random unimodular matrix of large operator norm would likely yield a very low-quality basis), but that it is the worst basis you can plausibly hand to an adversary.
This is because of the following simple result.
Let $\mu$ be efficiently computable.
Let $A$ be an efficient adversary that, on input a basis $B$, computes some unimodular $U$ such that $\mu(BU) \leq f(B)$ for some bounding function $f$. Then there exists an efficient adversary $A'$ that computes a unimodular $U$ such that $\mu(BU) \leq \min(f(B), f(HNF(B)))$.
Notably, $HNF(B)$ is an invariant of the lattice, and not the particular basis used.
Therefore, the above result says that, no matter which basis of the lattice you hand out, the adversary can reach quality at most $f(HNF(B))$; giving the adversary a non-HNF basis can only lead to them computing a basis of quality as good as, or better than, the one they would compute from the HNF basis.
The adversary $A'$ is easy to describe.
It simply computes $HNF(B)$ (which is efficiently computable from any basis $B$ of the lattice), runs $A$ on both $B$ and $HNF(B)$, and returns the basis that (of these two) minimizes the quality measure.
This does require that the quality measure be efficiently computable, but in most cases this is true.
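In code, $A'$ is just a small wrapper; here is a sketch in which `adversary` plays the role of $A$, and `hnf` and `quality` are placeholder callables for an HNF routine and for $\mu$ (none of these names are tied to a particular library; the column-vector convention matches the $BU$ notation above).

```python
import numpy as np

def a_prime(B, adversary, hnf, quality):
    """The adversary A': run A (`adversary`) on both B and HNF(B), and return
    whichever resulting basis B*U has the smaller value of mu (`quality`).
    Basis vectors are columns, matching the B*U notation."""
    candidates = []
    for basis in (np.asarray(B), np.asarray(hnf(B))):
        U = np.asarray(adversary(basis))  # A outputs a unimodular U for this input basis
        candidates.append(basis @ U)
    return min(candidates, key=quality)
```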
Edit: In response to your clarification in the comments, I'll point you to a partial answer, which is "in principle yes, but I do not have an explicit matrix on hand".
The following is from Chapter 3 of Seungki Kim's thesis.
Theorem 3.4 Let $n$ be sufficiently large, $p\in (0,100)$, and suppose $k < \frac{n}{6(\log n + K(p))}$ is a positive integer, where $K(p)$ is a constant depending only on $p$. Then at least $p\%$ of $n$-dimensional $(1, 1/2)$-LLL bases have
$$\forall\, 1\leq i \leq n-1 : \quad \frac{k-1}{k}\cdot\frac{1}{2}\leq |\mu_{i+1,i}|\leq \frac{1}{2}.$$
Here "most" requires technical care to get right (see Kim's thesis for details).
Kim goes on to discuss how this result is counterintuitive, in the sense that it implies that for "most LLL bases $B$", $\lVert LLL(B)_1\rVert \approx (4/3)^{n/4}\approx (1.0745)^n$, the worst-case bound for LLL.
But it is also known that, for many natural distributions over lattice bases $B$, one observes $\lVert LLL(B)_1\rVert \approx 1.02^n$ in practice. So what gives?
The claim is that $LLL$ is somehow biased, in the sense that it does not transport the aforementioned distributions over lattice bases to the uniform distribution over LLL bases.
Therefore, to find bad LLL bases, it makes sense to (directly) sample matrices that satisfy the LLL condition (rather than attempt to use LLL to compute bases experimentally).
I haven't thought about whether the natural way to do this (building the basis one vector at a time, chosen uniformly at random from a suitable set, conditioned on preserving the LLL conditions; see the sketch below) will yield the desired result.
There may be a more intelligent way to uniformly sample an LLL basis (which, by Kim's result, is likely a bad basis).
But, in light of Kim's thesis, it is a reasonable thing to attempt to do.
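For concreteness, here is a naive rejection-sampling sketch of that "one vector at a time" idea; the sampling box, tolerances, and retry limits are all ad hoc assumptions, and (as said above) I make no claim that this samples LLL bases uniformly, or that it scales to large $n$.

```python
import numpy as np

def gso(B):
    """Gram-Schmidt orthogonalization of the rows of B: returns (b* vectors, mu matrix)."""
    B = np.asarray(B, dtype=float)
    k = B.shape[0]
    gs, mus = np.zeros_like(B), np.zeros((k, k))
    for i in range(k):
        gs[i] = B[i]
        for j in range(i):
            mus[i, j] = np.dot(B[i], gs[j]) / np.dot(gs[j], gs[j])
            gs[i] -= mus[i, j] * gs[j]
    return gs, mus

def is_lll(B, delta=1.0, eta=0.5, tol=1e-9):
    """Check the (delta, eta)-LLL conditions (size reduction + Lovasz) for the rows of B."""
    gs, mus = gso(B)
    k = len(B)
    if any(abs(mus[i, j]) > eta + tol for i in range(k) for j in range(i)):
        return False
    for i in range(1, k):
        prev = np.dot(gs[i - 1], gs[i - 1])
        if np.dot(gs[i], gs[i]) + mus[i, i - 1] ** 2 * prev < delta * prev - tol:
            return False
    return True

def sample_lll_basis(n, box=5, tries_per_vector=10_000, seed=None):
    """Grow a basis one row at a time, keeping a candidate vector only if the
    partial basis stays linearly independent and LLL-reduced. May return None."""
    rng = np.random.default_rng(seed)
    basis = []
    for _ in range(n):
        for _ in range(tries_per_vector):
            v = rng.integers(-box, box + 1, size=n)
            candidate = basis + [v]
            M = np.asarray(candidate, dtype=float)
            if np.linalg.matrix_rank(M) == len(candidate) and is_lll(candidate):
                basis.append(v)
                break
        else:
            return None  # got stuck; the caller can simply retry
    return np.asarray(basis)

print(sample_lll_basis(4, seed=0))
```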
It's worth mentioning that Kim's result is (at minimum) an existence result.
A priori, a plausible explanation for why LLL behaves well on average would be that the worst-case bound is simply loose; Kim's work shows that this is not the case.