Are you sure you want to do this at the fastq level? (I don't understand why you want to do this at all) Do you really want to count every sequence with a one-off error as a unique sequence?
The typical approach would be to align your reads to their reference, then use Picard MarkDuplicates.
But if you really want to get unique sequences in the raw fastq:
zcat my.fastq.gz | awk 'NR%4==2' | awk '!x[$0]++' > unique.txt
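Note that the one-liner above writes only the deduplicated sequences, not valid fastq. If you want to keep the whole 4-line record for the first occurrence of each sequence, a sketch like this works (the demo input and filenames are made up for illustration):

```shell
# Demo input: three reads, two of which share the same sequence (made-up data).
printf '@r1\nACGT\n+\nIIII\n@r2\nACGT\n+\nJJJJ\n@r3\nTTTT\n+\nKKKK\n' | gzip > my.fastq.gz

# Buffer each record's header, sequence, and plus lines; on the quality line,
# print the full record only if this sequence has not been seen before.
zcat my.fastq.gz \
  | awk 'NR%4==1{h=$0} NR%4==2{s=$0} NR%4==3{p=$0}
         NR%4==0{if (!seen[s]++) print h "\n" s "\n" p "\n" $0}' \
  > unique.fastq
```

This keeps r1 and r3 and drops r2 (an exact duplicate of r1's sequence). Like the one-liner, it only catches identical sequences; a read with a single sequencing error still counts as unique.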
Use clumpify.sh from the BBMap suite. It works on fastq data as is; I suggest you do no other manipulations. See: A: Introducing Clumpify: Create 30% Smaller, Faster Gzipped Fastq Files
You can choose to allow one or more errors (substitutions) when calling duplicates, and it can separate PCR duplicates from optical duplicates.
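A minimal invocation might look like this (assumes BBMap is installed and on your PATH; filenames are placeholders):

```shell
# Mark and remove duplicates directly from gzipped fastq,
# allowing up to 1 substitution between copies of a read.
clumpify.sh in=my.fastq.gz out=deduped.fastq.gz dedupe subs=1
```

Without `dedupe`, clumpify.sh only reorders (clumps) reads for better compression; `subs=0` restricts it to exact duplicates.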