This isn't a full answer, more some ideas to try and questions that won't fit in a comment box. Please resist the urge to downvote :)
VPC peering is definitely worth a shot: that way traffic stays on the AWS backbone, which should reduce latency a bit. I don't know how much it will help, though. Those three regions are 200-300ms ping apart, so you will always have some delay.
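If you'd rather script the peering setup than click through the console, the request / accept / route steps look roughly like this with boto3 (the VPC IDs, regions, CIDR block and route table ID below are placeholders, not taken from your setup):

```python
# Sketch of cross-region VPC peering with boto3; all IDs are placeholders.
import boto3

use1 = boto3.client("ec2", region_name="us-east-1")
apse1 = boto3.client("ec2", region_name="ap-southeast-1")

# Request peering from the us-east-1 side.
peering = use1.create_vpc_peering_connection(
    VpcId="vpc-11111111",        # requester VPC (placeholder)
    PeerVpcId="vpc-22222222",    # accepter VPC (placeholder)
    PeerRegion="ap-southeast-1",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept it from the other region (in practice you may need to wait a
# moment before the accepter side can see the request).
apse1.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side still needs a route to the other VPC's CIDR via the peering connection.
use1.create_route(
    RouteTableId="rtb-33333333",         # placeholder route table
    DestinationCidrBlock="10.1.0.0/16",  # peer VPC CIDR (placeholder)
    VpcPeeringConnectionId=pcx_id,
)
```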
I suspect the conversation between the client and the DB involves multiple round trips for one insert - e.g. create the connection, connect to the specific database, insert, commit, close. If that's the case, reducing latency helps, but cutting out some of those steps matters more. Are you using connection pooling so the connections are already open? I suspect VPC peering plus this kind of general optimization will be a better solution than either of the ideas below.
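As a rough sketch of what pooling looks like, assuming PostgreSQL on RDS with psycopg2 (adjust for whatever engine and driver you're actually using; the endpoint, credentials and table are made up):

```python
# Sketch: keep connections open in a pool so each write is one or two
# round trips instead of connect + auth + insert + commit + close.
from psycopg2 import pool

db_pool = pool.SimpleConnectionPool(
    minconn=2,
    maxconn=10,
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="app",
    user="app_user",
    password="secret",
)

def save_event(payload):
    conn = db_pool.getconn()   # reuse an already-open connection
    try:
        with conn.cursor() as cur:
            cur.execute("INSERT INTO events (payload) VALUES (%s)", (payload,))
        conn.commit()          # the insert and commit are the only cross-region trips
    finally:
        db_pool.putconn(conn)  # return it to the pool, don't close it
```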
Is there any way you could make the updates asynchronous? If you can put writes into an SQS queue that is processed in a single region, they'll probably be applied within a second or two. That might be an improvement over direct cross-region database connections, depending on how fast it needs to be.
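The producer side might look something like this with boto3 (queue URL and payload shape are invented); a worker running in the same region as the database then drains the queue and does the actual inserts:

```python
# Sketch: enqueue the write locally, apply it to the DB from a worker near the DB.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/db-writes"  # placeholder

def enqueue_write(record):
    # The local SQS endpoint responds quickly; the cross-region DB write
    # happens later, outside the user's request path.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(record))

def worker_loop():
    # Runs in the database's region and applies queued writes.
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            record = json.loads(msg["Body"])
            # ... perform the actual insert/update against the database here ...
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```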
Multi-master is another option, using the database's native replication features. I'm not entirely sure you can do this in RDS, but it's worth checking whether it's possible and weighing the advantages and disadvantages. If you expect people to update the same record at the same time you will have to protect against that.
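A common way to protect against that is optimistic locking with a version column, something like this sketch (table and column names are mine, and I'm assuming a psycopg2-style DB-API connection):

```python
# Sketch of optimistic locking: the update only succeeds if the row still
# has the version the client read; otherwise someone else changed it first.
def update_profile(conn, user_id, new_name, expected_version):
    with conn.cursor() as cur:
        cur.execute(
            """
            UPDATE profiles
               SET name = %s, version = version + 1
             WHERE id = %s AND version = %s
            """,
            (new_name, user_id, expected_version),
        )
        if cur.rowcount == 0:
            # Another writer got there first: re-read the row and retry,
            # or surface the conflict to the user.
            raise RuntimeError("conflicting update, please retry")
    conn.commit()
```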
Another option could be sharding, with a specific user's data living on a specific database. That's going to make your application logic more complex, though.
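A minimal sketch of the routing side, with made-up endpoints and an integer user id:

```python
# Sketch: route each user to a "home" database, e.g. by home region,
# falling back to a simple modulo over the shards. Endpoints are placeholders.
SHARDS = {
    "us": "users-us.xxxxxxxx.us-east-1.rds.amazonaws.com",
    "eu": "users-eu.xxxxxxxx.eu-west-1.rds.amazonaws.com",
    "ap": "users-ap.xxxxxxxx.ap-southeast-1.rds.amazonaws.com",
}

def shard_for(user_id, home_region=None):
    # Prefer the user's home region so their writes stay local.
    if home_region in SHARDS:
        return SHARDS[home_region]
    keys = sorted(SHARDS)
    return SHARDS[keys[user_id % len(keys)]]
```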