First off, ZFS needs all top-level vdevs to be functional in order for the pool to operate. If one vdev goes offline, you lose access to all of the data in the pool. You are using individual disks as vdevs, so if one of those disks fails outright (as opposed to its current state of throwing many read errors), you will have to recreate the entire pool from scratch.
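Before deciding anything, it's worth looking at the current state of the pool; zpool status shows the per-device READ/WRITE/CKSUM counters, and -v adds any files already affected by data errors (I'm assuming your pool is named tank, as in the command below):

zpool status -v tank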
If you are on Solaris or if you are using OpenZFS 0.8 or later, you should be able to run:
zpool remove tank ata-ST2000DM001-1ER164_Z4Z0xxxx
This might not work! And even if it does, it can permanently degrade the pool's performance.
Vdev removal requires enough free space on the remaining disks to hold the displaced data. It looks like you probably have enough room in this case, but I'm mentioning the problem for completeness.
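One way to sanity-check the space situation is zpool list -v, which shows SIZE, ALLOC, and FREE per top-level vdev, so you can compare the allocated space on the failing disk against the free space on the rest:

zpool list -v tank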
On OpenZFS, at least, there are a number of restrictions on when you can remove vdevs. You can only remove a vdev if your pool consists solely of single-disk vdevs and/or mirrored vdevs. Your pool qualifies, because you're using single-disk vdevs exclusively. But if you had any raidz, draid, or special-allocation vdevs on OpenZFS, you wouldn't be able to do this.
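On OpenZFS you can also check that the pool actually has the device-removal feature available; it should be enabled by default on pools created with 0.8 or later, but a pool upgraded from an older version may need a zpool upgrade first:

zpool get feature@device_removal tank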
A final caveat is that removing a vdev incurs a permanent performance penalty in OpenZFS. OpenZFS keeps an internal remapping table for all of the blocks that previously lived on the removed disk, and for as long as those blocks exist in the pool, every access to them goes through an extra indirect lookup in that table. This can slow down random access significantly. I don't know enough about Solaris ZFS internals to say whether it does anything similar.
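You can see this afterwards, by the way: on OpenZFS the removed disk is replaced in the pool layout by an indirect-N placeholder that represents the remapping table, and it shows up in both of these:

zpool status tank
zpool list -v tank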
And, of course, ZFS will need to read all of the data from the failing disk in order to remove it. It is entirely possible that it will encounter enough errors during that process that it will simply fail the disk. If that happens, as discussed earlier, the entire pool will go offline and will likely be unrecoverable.
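If you decide to try it anyway, watch the evacuation and be ready to back out; on OpenZFS an in-progress removal can be cancelled:

zpool status tank      # shows evacuation progress and any new errors
zpool remove -s tank   # stops and cancels an in-progress removal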
If you have an available slot to add a disk, you might be better off putting in a spare disk and using zpool replace to substitute the new disk for the failing one. That incurs the same read load to copy the data off (and carries the same risk of the failing disk dying during the process), but if it succeeds you won't need to worry about the drawbacks that come with vdev removal.
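A sketch of what that would look like; the second device name here is a placeholder for whatever id your new disk shows up under:

zpool replace tank ata-ST2000DM001-1ER164_Z4Z0xxxx ata-NEWDISK_SERIALxxxx
zpool status tank      # watch the resilver progress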
In general, ZFS can be very brittle when used as you are using it, with non-redundant single-disk vdevs. There's an old joke that the zero in RAID0 is how much you must care about your data; a ZFS pool of single-disk vdevs is essentially RAID0 from a data-security standpoint, and a failure of any single disk will likely cost you all of your data. Even if you can afford to replace that data, make sure you also account for the time the replacement will take. If you can afford a performance penalty traded off for data security, consider putting your future pools' disks into raidz2 vdevs. If you can afford to trade usable disk space for data security (and possibly increased read performance), consider putting your future pools' disks into mirror vdevs.
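For reference, those layouts look roughly like this at creation time (pool and disk names are placeholders):

# six disks in one raidz2 vdev: any two disks can fail
zpool create newtank raidz2 disk0 disk1 disk2 disk3 disk4 disk5

# two 2-way mirror vdevs: one disk per mirror can fail
zpool create newtank mirror disk0 disk1 mirror disk2 disk3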