Often SAN/iSCSI storage is zoned between multiple Linux systems to be used for migrating data, high-availability, etc.
What I commonly see, however, is the tendency to mount this storage on all systems at the same time. This is a no-no. In fact, it’s not enough to just not “mount” the storage: you should make sure the Volume Group (if using LVM) is only activated on one host at a time.
Once the storage is zoned and all systems see it, I generally do the pvcreate, vgcreate, lvcreate, and mkfs on one system. Then, I mount it to make sure it mounts, and unmount it. Next, I run:
vgchange -a n vg_name
This deactivates the Volume Group and makes sure nothing on that system can access it until it’s activated again with:
vgchange -a y vg_name
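Putting the whole workflow together, the initial setup on the first host might look like the sketch below. The device name /dev/sdb, the VG/LV names, the LV size, and the ext4 filesystem are all assumptions for illustration; substitute your own.

```shell
# All of this runs as root on ONE host only.
# /dev/sdb is assumed to be the zoned SAN/iSCSI LUN -- yours will differ.
pvcreate /dev/sdb                      # initialize the LUN as an LVM physical volume
vgcreate vg_shared /dev/sdb            # create the Volume Group on it
lvcreate -n lv_data -L 50G vg_shared   # carve out a Logical Volume (size is an example)
mkfs.ext4 /dev/vg_shared/lv_data       # put a filesystem on it

# Sanity-check that it mounts, then unmount.
mount /dev/vg_shared/lv_data /mnt
umount /mnt

# Deactivate the VG so nothing on this host can touch it.
vgchange -a n vg_shared
```

On the host that actually needs the storage, activate with `vgchange -a y vg_shared` before mounting, and deactivate again after unmounting when handing it off.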
This is one extra layer of protection beyond simply not mounting the filesystem, and it can prevent filesystem corruption from boot-time fsck runs and the like.
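To keep the shared VG from being auto-activated at boot in the first place, you can also whitelist only the local VGs in lvm.conf. A minimal sketch, assuming your host’s own VG is named vg_local (a placeholder):

```
# /etc/lvm/lvm.conf -- activation section
activation {
    # Only auto-activate the local VG at boot; any VG not listed here
    # (such as the shared SAN VG) stays inactive until activated by hand.
    auto_activation_volume_list = [ "vg_local" ]
}
```

With this in place, the shared VG remains inactive after a reboot even if the LUN is visible, so a stray boot-time fsck or mount can’t touch it.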
If you need concurrent access to the storage from multiple systems, you will need a clustered filesystem such as GFS2 or OCFS2, along with cluster-aware LVM. Alternatively, NFS may be a simpler fit, depending on your needs.
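For the NFS route, one host mounts the LVM filesystem locally and exports it to the others. A minimal sketch, where /srv/data, node1, and node2 are placeholders for your mount point and client hostnames:

```
# /etc/exports on the host that owns the storage
/srv/data    node1(rw,sync,no_subtree_check) node2(rw,sync,no_subtree_check)
```

After editing the file, run `exportfs -ra` to apply it; clients then mount with `mount node0:/srv/data /mnt` (node0 being the exporting host). Note this keeps a single writer to the block device, so the activate-on-one-host rule above still applies.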