I've had some bad times with external drives and Linux.
Don't use ext4 on external USB drives.
Do use exFAT on external USB drives.
If you have to use ext4, consider using a loopback-mounted file on an exFAT volume.
For large corrupted volumes, do use extra ddrescue parameters to skip troublesome blocks - something like --min-read-rate 6000000 --reset-slow --max-slow-reads=40
ext4 filesystems are great: reasonable cluster sizes, high reliability, journaled writes. What's not to like, right? It turns out they're a bad choice for external USB3 drives. ext4 assumes a robust, always-available connection to the hardware. USB3 is lovely, but on cheaper, low-power hardware (e.g. ARM-based boards) it can be quite flaky, and ext4's high-frequency read/write cycle can clash with the unreliability of the bus.
The combination of bus flakiness and high-frequency I/O can be enough to trigger a high error count in the S.M.A.R.T. counters of many modern hard drives and significantly degrade the drive's performance.
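You can watch this happening with smartctl from the smartmontools package; attribute 199 (UDMA_CRC_Error_Count) is the one that tends to climb when the link itself is dropping data. A sketch of pulling the raw count out of smartctl -A output - the attribute line below is illustrative, not from a real drive:

```shell
# Illustrative smartctl -A attribute line (not from a real drive);
# on a live system you'd run: sudo smartctl -A /dev/sdb
line='199 UDMA_CRC_Error_Count   0x003e   200   200   000    Old_age   Always       -       57'

# The raw error count is the last field
echo "$line" | awk '{ print $NF }'    # prints 57
```

If that number keeps rising while the drive itself reports healthy, suspect the cable, hub, or host controller rather than the platters.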
exFAT, on the other hand, copes much better with these situations. It won't have the same data throughput as our friend ext4, but my experience so far has been that it's far more tolerant of bus hiccups.
exFAT is well supported on Linux; when invoking mkfs.exfat on modern terabyte-sized hard drives, I'd recommend using a smaller cluster size (-s 32 if you have lots of little files).
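For reference, exfat-utils' mkfs.exfat takes -s as sectors per cluster, so with standard 512-byte sectors -s 32 gives 16 KiB clusters (the newer exfatprogs fork spells the option -c / --cluster-size instead). The device path below is hypothetical and the command is destructive, so double-check it first:

```shell
# 32 sectors/cluster x 512 bytes/sector = cluster size in bytes
echo $((32 * 512))    # prints 16384 (16 KiB)

# Hypothetical device path; formatting destroys the volume's contents:
#   sudo mkfs.exfat -n backups -s 32 /dev/sdb1
```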
If you really must have an ext4 volume, a particularly insane solution is to create an ext4 filesystem inside a file on the parent exFAT volume and mount it through a loopback device. An example:
# ~1 TB of zeroes as the backing file
dd if=/dev/zero of=fs.bin bs=1M count=1000000
# Attach it to a loopback device, format it, and mount it
sudo losetup /dev/loop1 fs.bin
sudo mkfs.ext4 /dev/loop1
sudo mount /dev/loop1 mountpoint
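One caveat with the dd step: writing a terabyte of zeroes takes a long time. If the filesystem holding the backing file supports sparse files, truncate creates it near-instantly - though note exFAT itself has no real sparse-file support, so on an exFAT parent volume the driver may still allocate the space, and it's worth testing before relying on it:

```shell
# Near-instant on sparse-capable filesystems:
#   truncate -s 1T fs.bin

# The same call at a demo size, checking the apparent file size:
truncate -s 1M /tmp/fs_demo.bin
stat -c %s /tmp/fs_demo.bin    # prints 1048576
```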
If you've put your drive into a bad state and you want to recover the data, I have a couple of recommendations that are independent of the filesystem on it.
First of all, I'd recommend not re-mounting the drive with write permissions. If you have to mount it, do something like
sudo mount -o ro,noload /dev/sdb1 recoveryMountPoint
(ro mounts the filesystem read-only; noload stops ext4 from replaying the journal, which would otherwise write to the disk.)
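It's worth confirming the kernel really registered the mount as read-only by checking /proc/mounts. The entry below is a sample of what a successful ro,noload mount looks like, using the hypothetical paths from above:

```shell
# Sample /proc/mounts entry for illustration; on a live system:
#   grep recoveryMountPoint /proc/mounts
entry='/dev/sdb1 /recoveryMountPoint ext4 ro,noload 0 0'

# The fourth field holds the mount options; check that it starts with "ro"
case "$(echo "$entry" | awk '{ print $4 }')" in
    ro|ro,*) echo "read-only" ;;
    *)       echo "writable" ;;
esac    # prints read-only
```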
The best approach I've found so far is to image the whole disk in one shot into a recovery image, put the original hardware aside (before it gets worse!), and then work with the recovery image instead. gddrescue (sudo apt-get install gddrescue; the binary it installs is called ddrescue) is pretty great for this. In my particular case I had a drive that was unreliable but only mildly damaged; here's the script I used:
#!/bin/bash
# Run as root. ddrescue resumes from the mapfile on each iteration,
# so repeated passes chip away at whatever is still unread.
while :
do
    ddrescue --min-read-rate 6000000 --reset-slow --max-slow-reads=40 -vvvv /dev/sdb1 hdimage mapfile
    sleep 300
done
Ctrl-C that monster when it gets to 100%. The read-rate and reset parameters make for a quick first pass; the later passes then concentrate on the bad sectors.
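Incidentally, the mapfile is what lets you interrupt and resume freely: ddrescue records each run of sectors with a status character ('+' for rescued, '-' for bad). The format is plain text, so you can total the rescued bytes yourself; a sketch against a toy mapfile (a real run writes this file for you):

```shell
# Toy mapfile in ddrescue's documented format
cat > /tmp/demo_mapfile <<'EOF'
# Mapfile. Created by GNU ddrescue
# current_pos  current_status
0x00030000     +
#      pos        size  status
0x00000000  0x00010000  +
0x00010000  0x00000200  -
0x00010200  0x0001FE00  +
EOF

# Sum the sizes of the '+' (rescued) runs; shell arithmetic reads the hex
total=0
while read -r pos size status; do
    case "$pos" in 0x*) ;; *) continue ;; esac
    [ "$status" = "+" ] && total=$(( total + size ))
done < /tmp/demo_mapfile
echo "$total"    # prints 196096 (0x10000 + 0x1FE00 bytes)
```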
Finally, run one last pass with direct disc access (-d) and three retries over the remaining bad sectors (-r3):
ddrescue -vvvv -d -r3 /dev/sdb1 hdimage mapfile
After that, you can attach the image to a loopback device and try to recover what you can:
sudo losetup /dev/loop1 hdimage
sudo mount /dev/loop1 mountpoint
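This works directly because we imaged the partition (/dev/sdb1). If you image the whole disk (/dev/sdb) instead, the filesystem starts at the partition's byte offset, which you pass to losetup with -o (or use losetup -P and let the kernel scan the partition table). For a partition starting at sector 2048, a common default:

```shell
# Byte offset of a partition starting at sector 2048 (512-byte sectors)
echo $((2048 * 512))    # prints 1048576

# Attach at that offset, read-only, then mount (hypothetical names):
#   sudo losetup -r -o 1048576 /dev/loop1 hdimage
#   sudo mount -o ro /dev/loop1 mountpoint
```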