I'm not an expert in this field, but since there is no answer yet, I'll give it a try.
My university had some projects on data privacy in machine learning and on data privacy for medical data (are those enough identifiers to identify my university? :D). I would say the medical-data projects belong to big data, and the main approaches were k-anonymity (because of its simplicity) and differential privacy.
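To make the k-anonymity idea concrete, here is a minimal sketch: a dataset is k-anonymous (with respect to a chosen set of quasi-identifiers such as age bracket or ZIP prefix) if every combination of quasi-identifier values appears in at least k records. The field names and toy records below are hypothetical, not from the projects I mentioned.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Check whether every combination of quasi-identifier values
    occurs at least k times in the dataset."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return all(count >= k for count in groups.values())

# Hypothetical toy data: age bracket and ZIP prefix are the quasi-identifiers.
records = [
    {"age": "30-40", "zip": "123**", "diagnosis": "flu"},
    {"age": "30-40", "zip": "123**", "diagnosis": "cold"},
    {"age": "50-60", "zip": "456**", "diagnosis": "flu"},
]

print(is_k_anonymous(records, ["age", "zip"], 2))  # False: the 50-60 group has only one record
```

In practice, reaching k-anonymity means generalizing or suppressing values (e.g. coarsening ages into brackets) until every group is large enough.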
I would count machine learning as part of AI, and those projects tried to protect the privacy of the data on which the networks are trained. As far as I understand it, the same techniques were used to protect the privacy of the training input data.
To simplify the idea: you can assume that machine learning uses big data to train networks. If the original data is not secure/private, the resulting network isn't either. Conversely, they argued for the security/privacy of the network when the data was protected (of course with a much more complex argumentation).
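Differential privacy, the second technique mentioned above, can be sketched with the classic Laplace mechanism on a simple count query: the released answer is the true count plus noise calibrated to the query's sensitivity, so no single record changes the output distribution much. This is a generic textbook illustration under my own assumptions, not the projects' actual setup.

```python
import math
import random

def dp_count(values, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A count query has sensitivity 1 (adding or removing one record
    changes it by at most 1), so Laplace noise with scale 1/epsilon
    suffices."""
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-CDF sampling of Laplace(0, 1/epsilon) from one uniform draw.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return true_count + noise

# Hypothetical toy query: how many patients are 40 or older?
ages = [23, 45, 67, 34, 89]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy answer near 3
```

Smaller epsilon means more noise and stronger privacy; training-time variants of this idea (adding calibrated noise to gradients, as in DP-SGD) are what make the resulting network itself carry a privacy guarantee.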
But AI and big data are buzzwords covering a lot of topics, and those projects targeted only specific (mainly medical) applications. So this may not hold for other AI approaches, or even for other use cases of machine learning. Nevertheless, I hope I have been able to provide a little food for thought.