Using an ALB (Application Load Balancer) sounds appropriate for this use case. It's the right choice for most HTTP(S) load balancing in AWS, though there are some edge cases it can't handle.
The ALB has a fairly flexible architecture: listeners accept incoming connections on a given port and forward them to target groups, so you should be able to do what you need. That said, once you're into AWS and understand how things fit together, you may find you don't need to do it that way; it depends on how your solution is architected.
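To make the listener/target-group relationship concrete, here's a minimal boto3 sketch. The VPC ID, ALB ARN, and ACM certificate ARN are placeholders, and the health check path assumes your app serves such an endpoint:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# A target group holds the Tomcat servers; the ALB health-checks each member.
tg = elbv2.create_target_group(
    Name="tomcat-tg",
    Protocol="HTTP",
    Port=8080,                      # Tomcat's default HTTP connector port
    VpcId="vpc-0123456789abcdef0",  # placeholder: your VPC
    TargetType="instance",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/healthcheck", # placeholder: an endpoint your app serves
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# A listener binds a port on the ALB to that target group.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...",   # placeholder: your ALB ARN
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:..."}], # placeholder: ACM cert
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```

You can attach multiple listeners (different ports or protocols) and multiple target groups to one ALB, which is where the flexibility comes from.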
Having one server per Tomcat instance sounds sensible to me; put one in each availability zone for redundancy. Be aware that with cross-zone load balancing (on by default for ALBs) the load balancer node in AZ-A can send traffic to the instance in AZ-B. For ALBs AWS doesn't bill that as inter-AZ data transfer (it does for NLBs), but keep an eye on your bill and, if needed, disable cross-zone routing so each load balancer node only sends traffic to targets in its own AZ. You should also be considering autoscaling and health checks.
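As a hedged boto3 sketch of that setup (the instance IDs and target group ARN are placeholders), this registers one target per AZ, disables cross-zone routing at the target-group level, and checks target health:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
tg_arn = "arn:aws:elasticloadbalancing:..."  # placeholder: your target group ARN

# Register one Tomcat server per AZ (instance IDs are placeholders).
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[
        {"Id": "i-0aaaaaaaaaaaaaaaa"},  # server in AZ a
        {"Id": "i-0bbbbbbbbbbbbbbbb"},  # server in AZ b
    ],
)

# Optionally keep traffic within each AZ by turning cross-zone routing
# off for this target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn=tg_arn,
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "false"}],
)

# Confirm the ALB sees both targets as healthy before cutting traffic over.
health = elbv2.describe_target_health(TargetGroupArn=tg_arn)
for desc in health["TargetHealthDescriptions"]:
    print(desc["Target"]["Id"], desc["TargetHealth"]["State"])
```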
Instead of instances, you should consider containers, which you can run either on EC2 instances you manage or on Fargate. Fargate costs a little more, but AWS manages the underlying hosts, which saves you effort. AWS Elastic Container Service (ECS) is sufficient for most use cases; you don't need EKS/Kubernetes and all the complexity it brings.
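If you go the ECS/Fargate route, the moving parts are a task definition and a service. Here's a hedged boto3 sketch with placeholder names, images, and ARNs; note that a Fargate service behind an ALB needs a target group with `TargetType="ip"` rather than `"instance"`:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Task definition: one Tomcat container per task.
task = ecs.register_task_definition(
    family="tomcat-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",   # required for Fargate
    cpu="512",              # 0.5 vCPU
    memory="1024",          # 1 GiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "tomcat",
        "image": "tomcat:9-jdk17",  # or your own image in ECR
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Service: keep two tasks running, spread across subnets in different AZs,
# registered with the ALB target group.
ecs.create_service(
    cluster="my-cluster",   # placeholder cluster name
    serviceName="tomcat-service",
    taskDefinition=task["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaa", "subnet-bbb"],  # placeholders, one per AZ
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:...",  # placeholder
        "containerName": "tomcat",
        "containerPort": 8080,
    }],
)
```

ECS then replaces unhealthy tasks automatically, which covers much of the redundancy work you'd otherwise script yourself.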
You could also simply lift and shift your existing servers into AWS. That usually works, but it's often just the first step toward rearchitecting the solution to work in a more cloud-native manner.
If you don't understand some of the terms in my answer you're welcome to ask questions, but I also suggest getting some training. AWS is a complex enterprise environment, and it's easy to get things wrong, run up a large bill, or miss something that creates security issues. Training for the AWS Solutions Architect Associate certification would be suitable. There are plenty of training providers around; I like Adrian Cantrill's training, especially for beginners, but there's also A Cloud Guru and many others.