To start, I'm by no means an expert in cryptography, or anything near it. I know the very basics: enough to more or less choose a method to implement and then read up on it so I knew what I was implementing. So please excuse any supposedly dumb questions haha.
With that in mind, I've created an AES-256/CBC/PKCS#7 + HMAC-SHA512 encryption/decryption class in an Android assistant app I'm making (it's supposed to hold locally stored, (very?) personal things, and I might publish it on the Play Store, so it's not only for me). That combination is supposed to be highly secure for the next who-knows-how-many years. It might be a bit slower, but I don't mind, at least for now (very little data). However, I also read that there's one problem with this, which is when the initialization vector gets reused. With CBC, it seems a reused IV doesn't let an attacker recover the plaintext (right?), as is possible with other modes (like CTR, where keystream reuse does), and that's why I chose this one. [If there are any other problems with this method, I'm happy to know about them.]
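For reference, here's a simplified sketch of what my class does (encrypt-then-MAC with a fresh random IV per message; the class and method names are just for illustration, not my real code):

```java
import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.security.SecureRandom;

public class AesCbcHmac {
    // Encrypt-then-MAC: AES-256/CBC with a fresh random IV, then
    // HMAC-SHA512 over IV || ciphertext. Output: IV || ct || tag.
    static byte[] encrypt(byte[] plaintext, byte[] aesKey, byte[] macKey) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv); // fresh random IV every time

        // Java calls it PKCS5Padding, but for 16-byte blocks it is PKCS#7
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(aesKey, "AES"),
                    new IvParameterSpec(iv));
        byte[] ct = cipher.doFinal(plaintext);

        Mac mac = Mac.getInstance("HmacSHA512");
        mac.init(new SecretKeySpec(macKey, "HmacSHA512"));
        mac.update(iv);                 // MAC covers the IV too
        byte[] tag = mac.doFinal(ct);   // 64-byte tag

        return ByteBuffer.allocate(16 + ct.length + 64)
                         .put(iv).put(ct).put(tag).array();
    }
}
```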
But it's still possible, after some time, to detect patterns and see where messages are equal: with CBC and a reused IV, two messages that start with the same 16-byte blocks produce the same leading ciphertext blocks. Knowing what that prefix means, one could tell it hasn't changed over time across various encrypted versions, for example.
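To make the problem concrete, here's a small demo of what I mean: two messages sharing a 16-byte prefix, encrypted under the same key and the same (reused) IV. The zeroed key and IV are just placeholders for the demo.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class IvReuseDemo {
    static byte[] cbc(byte[] pt, byte[] key, byte[] iv) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
               new IvParameterSpec(iv));
        return c.doFinal(pt);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[32], iv = new byte[16]; // same key, reused IV!
        // Both plaintexts share the first 16 bytes: "balance=100;user"
        byte[] a = cbc("balance=100;user=alice".getBytes(), key, iv);
        byte[] b = cbc("balance=100;user=bob..".getBytes(), key, iv);
        // First ciphertext blocks match, so the shared prefix is visible;
        // later blocks differ because CBC chains each block into the next.
        System.out.println(Arrays.equals(Arrays.copyOf(a, 16), Arrays.copyOf(b, 16))); // true
        System.out.println(Arrays.equals(a, b)); // false
    }
}
```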
So I had an idea: all the data I encrypt must first be encoded using UTF-7, so every byte is in the range 0-127. The remaining byte values (128-255) are then used as random marker bytes, one inserted at a random position within every 16 bytes. For example, at index 4 a byte with value 154 is inserted, and at index 19 a byte with value 234 is inserted. This way the input is always randomized, and blocks that would otherwise be equal will differ even if the same IV is reused (a "random" IV can repeat values, and I can't check whether I've already used one in this case, so I thought of this to prevent problems).
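Something like this, in simplified form (the UTF-7 step is assumed to have already happened, i.e. the input here is pure ASCII; the names are just for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.security.SecureRandom;

public class RandomByteStuffing {
    // For every 16 bytes of ASCII input (all values < 128), insert one
    // random byte in the range 128-255 at a random offset in that chunk.
    static byte[] stuff(byte[] ascii) {
        SecureRandom rnd = new SecureRandom();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int start = 0; start < ascii.length; start += 16) {
            int len = Math.min(16, ascii.length - start);
            int pos = rnd.nextInt(len + 1); // random insert position in this chunk
            for (int i = 0; i < len; i++) {
                if (i == pos) out.write(128 + rnd.nextInt(128)); // marker >= 128
                out.write(ascii[start + i]);
            }
            if (pos == len) out.write(128 + rnd.nextInt(128)); // marker at chunk end
        }
        return out.toByteArray();
    }

    // Undoing it is easy: the real data is exactly the bytes below 128.
    static byte[] unstuff(byte[] stuffed) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte b : stuffed) if ((b & 0xFF) < 128) out.write(b);
        return out.toByteArray();
    }
}
```

The markers also shift everything after them, so even identical messages shouldn't line up on the same block boundaries.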
Is this a good approach? Might it mitigate the problem? Maybe even solve it completely for the next infinite years, making the method completely safe, or at least much safer?
Also, if anything I said is wrong, I'm happy to be corrected! Thanks!