In principle, the goal of symmetric cipher designers is that no algorithm should solve the cipher faster than well-optimized brute-force search, both classically and quantumly (the quantum analogue of brute-force search is usually taken to be Grover's algorithm). If that goal is achieved, then by assumption machine learning cannot provide a shortcut attack or help to find one, simply because no such attack exists. The best machine learning might do in that situation is help the designer of a brute-force solver squeeze the maximum performance out of the energy, silicon, and development work they are willing to invest. That help might still be extremely useful to an attacker trying to run an attack that is otherwise just at the edge of feasibility, but it won't put a work factor $\approx 2^{127}$ attack (i.e. breaking AES) into their reach.
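To make the classical/quantum gap concrete: Grover's algorithm searches an unstructured space of size $2^k$ with roughly $(\pi/4)\cdot 2^{k/2}$ oracle calls, so a 128-bit key retains about 64 bits of quantum security. A back-of-the-envelope sketch (the function name is mine, not from any library):

```python
import math

def brute_force_cost_bits(key_bits: int) -> tuple[int, float]:
    """Work factor (log2 of cipher evaluations) for exhaustive key search.

    Classical search tries ~2^k keys; Grover's algorithm needs only
    ~(pi/4) * 2^(k/2) oracle calls, i.e. roughly k/2 bits of work.
    """
    classical = key_bits                              # ~2^128 for AES-128
    quantum = key_bits / 2 + math.log2(math.pi / 4)   # ~2^63.65
    return classical, quantum

c, q = brute_force_cost_bits(128)
```

Even the quantum figure of roughly $2^{64}$ *sequential* cipher evaluations is far outside feasibility for any foreseeable quantum hardware.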
However, for practically relevant cryptographic primitives, there is currently no proof that no efficient algorithmic break exists. What we do have are:
- proofs that a scheme is secure with regard to specific security requirements, under the assumption that all of its components are secure or that certain computational problems that are not themselves cryptographic (such as integer factorisation) are hard,
- proofs that certain attack strategies cannot work (e.g. no useful single-trail linear distinguisher exists against more than $n$ rounds of cipher X),
- strong heuristic arguments that simple extensions of these strategies don't work either (e.g. no differential distinguishers of any kind against cipher X under the Markov assumption),
- very skilled cryptographers trying to break the primitive in question using every technique available to them (including ML) and succeeding only against reduced versions, with some security margin left.
ML techniques can be useful as one of a number of tools at the disposal of a cryptanalyst to help figure out the security properties of parts of a cipher. Currently, the most popular use of ML in cryptanalysis is learning differential-like distinguishers against round-reduced versions of small block ciphers. In that role, ML methods have certainly found unexpected things. However, constructing state-of-the-art attacks from the resulting distinguishers still requires a significant amount of cryptanalytic expertise. A recent example is this paper from Asiacrypt 2022.
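The data-generation step for such a neural distinguisher can be sketched for Speck32/64, the cipher most studied in this line of work, with the fixed input difference `(0x0040, 0x0000)` that is standard there. One simplification in this sketch: round keys are drawn independently at random per sample instead of coming from the real Speck key schedule.

```python
import numpy as np

MASK = 0xFFFF  # Speck32/64 operates on 16-bit words

def ror(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def speck_round(x, y, k):
    # Speck encryption round: modular add, rotate, XOR
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def encrypt(x, y, round_keys):
    for k in round_keys:
        x, y = speck_round(x, y, k)
    return x, y

def make_dataset(n, rounds, rng, diff=(0x0040, 0x0000)):
    """Labeled ciphertext pairs: label 1 = real pair encrypted from
    plaintexts with the fixed input difference, label 0 = the second
    ciphertext replaced by random data (a simplified negative class)."""
    labels = rng.integers(0, 2, n)
    keys = rng.integers(0, 1 << 16, (rounds, n))  # random round keys
    p0x = rng.integers(0, 1 << 16, n)
    p0y = rng.integers(0, 1 << 16, n)
    c0x, c0y = encrypt(p0x, p0y, keys)
    c1x, c1y = encrypt(p0x ^ diff[0], p0y ^ diff[1], keys)
    c1x = np.where(labels == 1, c1x, rng.integers(0, 1 << 16, n))
    c1y = np.where(labels == 1, c1y, rng.integers(0, 1 << 16, n))
    return np.stack([c0x, c0y, c1x, c1y], axis=1), labels
```

A classifier (a small residual network, in the published work) is then trained on these samples to predict the labels; test accuracy significantly above 0.5 constitutes a distinguisher for that number of rounds.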
Of course, AI/ML methods can also be used to break cryptographic implementations instead of analyzing the underlying algorithms. Neural networks have, for instance, been wildly successful at exploiting side-channel leakage from cryptographic implementations. These attacks exploit the fact that the circuit running a cryptographic algorithm will, as a by-product of its operation, consume power, emit electromagnetic radiation, or even produce sound, and that these physical side effects of computation carry information about the secrets being processed. With sufficiently sensitive measuring equipment and clever processing (which is where the neural networks come in), this can break real cryptographic implementations in practice. Again, however, state-of-the-art attacks still require very significant expertise on the part of the analyst.
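The core idea can be illustrated without any neural network at all, using classical correlation power analysis (CPA) on simulated leakage; the ML methods earn their keep when real traces are noisy, misaligned, or protected by countermeasures. A minimal noise-free toy sketch, assuming a hypothetical device that leaks the Hamming weight of a 4-bit S-box output (PRESENT's S-box stands in as the target):

```python
import numpy as np

# PRESENT's 4-bit S-box, standing in for the targeted key-dependent operation
SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])

def hamming_weight(v):
    return np.array([bin(int(x)).count("1") for x in v], dtype=float)

def simulate_traces(plaintexts, key, noise_std, rng):
    """One leakage sample per encryption: HW of the S-box output plus noise."""
    leak = hamming_weight(SBOX[plaintexts ^ key])
    return leak + rng.normal(0.0, noise_std, leak.shape)

def cpa_recover_key(plaintexts, traces):
    """Correlate the measured leakage with the HW model under each key guess;
    the correct guess predicts the leakage best."""
    corr = [abs(np.corrcoef(hamming_weight(SBOX[plaintexts ^ g]), traces)[0, 1])
            for g in range(16)]
    return int(np.argmax(corr))

rng = np.random.default_rng(1)
pts = rng.integers(0, 16, 500)
traces = simulate_traces(pts, key=0xB, noise_std=0.0, rng=rng)  # noise-free toy
recovered = cpa_recover_key(pts, traces)
assert recovered == 0xB
```

Real attacks face noisy, high-dimensional traces where the leaking sample's position is unknown; that is the regime in which learned models have outperformed hand-tuned statistical approaches.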
There is a lot of literature on countermeasures against these attacks. The countermeasures are not designed to block AI-based attacks specifically, but to block any exploitation of compromising emanations. The known countermeasures can certainly be made effective against a wide range of realistic attackers, but this comes at a cost. Reducing that cost is very much an area of active research.
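One standard countermeasure family is masking: each secret intermediate value is split into random shares so that no single measured value statistically depends on the secret. A minimal first-order Boolean masking sketch in the table-recomputation style, again on a toy 4-bit S-box (the function names are mine):

```python
import secrets

# PRESENT's 4-bit S-box as a toy target
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def masked_sbox(x_masked, m_in, m_out):
    """S-box lookup on a masked value via table recomputation.

    Builds T[i] = SBOX[i ^ m_in] ^ m_out, so that T[x ^ m_in] equals
    SBOX[x] ^ m_out: neither the unmasked input nor the unmasked
    output ever appears as an intermediate value.
    """
    table = [SBOX[i ^ m_in] ^ m_out for i in range(16)]
    return table[x_masked]

secret = 0x7
m_in, m_out = secrets.randbelow(16), secrets.randbelow(16)  # fresh random masks
x_masked = secret ^ m_in        # the device only ever handles x_masked
y_masked = masked_sbox(x_masked, m_in, m_out)
assert y_masked ^ m_out == SBOX[secret]
```

The cost mentioned above shows up directly here: extra randomness per operation, extra memory for the recomputed table, and more work per S-box lookup; higher-order masking multiplies these overheads further.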