Suppose Alice and Bob agree on a number face to face; let's call it "97".
Alice's original message is "Where did you study?"
Suppose we have an artificial intelligence, and let it produce 1001 meaningful messages, with Alice's real message placed at position 97:
1. Message: "You were so good at school"
2. Message: "My uncle came to visit. I told him about you"
3. Message: "Has your illness passed? Are you better?"
.
.
.
97. Message: "Where did you study?"
.
.
.
1001. Message: "I didn't understand the ontological argument in the book you suggested"
Alice then sends the whole list to Bob. Eve cannot be sure which message is the original, but Bob knows, because "97" is the secret key.
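Here is a minimal sketch of the idea in Python. The `generate_decoys` function is only a hypothetical placeholder for the AI (it samples from a tiny fixed pool); `encode` and `decode` show how the shared number 97 is used as the secret index.

```python
import random

# Placeholder standing in for the AI: in the real scheme this would be a
# generator of plausible, human-looking messages.
DECOY_POOL = [
    "You were so good at school",
    "My uncle came to visit. I told him about you",
    "Has your illness passed? Are you better?",
    "I didn't understand the ontological argument in the book you suggested",
]

def generate_decoys(count):
    """Hypothetical AI: return `count` plausible-looking decoy messages."""
    return [random.choice(DECOY_POOL) for _ in range(count)]

def encode(real_message, secret_index, total=1001):
    """Build a list of `total` messages with the real one at `secret_index` (1-based)."""
    messages = generate_decoys(total)
    messages[secret_index - 1] = real_message
    return messages

def decode(messages, secret_index):
    """The receiver simply reads off the message at the shared secret position."""
    return messages[secret_index - 1]

if __name__ == "__main__":
    KEY = 97  # the number Alice and Bob agreed on face to face
    sent = encode("Where did you study?", KEY)
    assert decode(sent, KEY) == "Where did you study?"
```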
Let Bob's answer be "I studied in Ukraine"
Let the artificial intelligence again prepare 1001 meaningful messages, with Bob's answer at position 97:
1. Message: "You know, I got over my depression. It was a lucky day"
2. Message: "So what did you say about me? I hope you mentioned that I'm a great person"
3. Message: "I think I'm dying. Life would be better if I didn't have a chronic cough"
.
.
.
97. Message: "I studied in Ukraine"
.
.
.
1001. Message: "I'm available today. Come to my house and I will help you"
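Bob's reply works the same way in the other direction; reusing the hypothetical `encode`/`decode` and `KEY` from the sketch above:

```python
# Continuing the sketch above: Bob answers using the same shared number 97.
reply = encode("I studied in Ukraine", KEY)          # Bob builds his own list of 1001 messages
assert decode(reply, KEY) == "I studied in Ukraine"  # Alice reads off position 97
```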
I know that if Eve knows Bob or Alice personally, she can rule out some of the candidate messages, but if the AI's algorithm is good enough, Eve will be truly helpless. I also know that this scheme does not detect whether a message has been tampered with by Eve, but that can be overcome quite simply, for example with a message authentication code, as sketched below.
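One option for integrity (not part of the original idea, and assuming Alice and Bob also share a second secret used only for authentication) would be to attach an HMAC tag computed over the real message; since Eve does not have the MAC key, she can neither forge a valid tag nor use the tag to test which of the 1001 messages is the real one.

```python
import hmac
import hashlib

# Assumption beyond the original scheme: a second shared secret used only for authentication.
MAC_KEY = b"second shared secret"

def tag(message: str) -> str:
    """Tag over the real message, sent alongside the list of 1001 messages."""
    return hmac.new(MAC_KEY, message.encode(), hashlib.sha256).hexdigest()

def verify(message: str, received_tag: str) -> bool:
    """The receiver recomputes the tag on the message at the secret position and compares."""
    return hmac.compare_digest(tag(message), received_tag)
```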
Just as the Vernam cipher relies on assumptions such as "the key is truly random", this scheme can rely on assumptions such as "the artificial intelligence produces decoy messages that are too plausible for Eve to sift out".
Isn't this scheme as secure as the Vernam cipher under these assumptions?