I have one successfully downloaded file and a second, failed download (only the first 100 MB of a large file) that I suspect is the same file.

To verify this, I'd like to check their hashes, but since I only have a part of the unsuccessfully downloaded file, I only want to hash the first few megabytes or so.

How do I do this?

The OS is Windows, but I have Cygwin and MinGW installed.

  • Efficiently comparing one file on a local computer with another file on a distant computer is a key part of rsync, which compares parts of the files with a special hash function. – David Cary 2 days ago
  • @DavidCary In my case, I do not have shell access to the remote computer, but thanks for the hint, I will read the manpage – sinned 2 days ago

Creating hashes to compare files makes sense if you compare one file against many, or when comparing many files against each other.

It does not make sense when comparing two files only once: The effort to compute the hashes is at least as high as walking over the files and comparing them directly.

An efficient file comparison tool is cmp:

cmp --bytes $((100 * 1024 * 1024)) file1 file2 && echo "File fragments are identical"

You can also combine it with dd to compare arbitrary parts (not necessarily from the beginning) of two files, e.g.:

# skip=1 skips one 100M block, so this compares each file's second 100 MiB
cmp \
    <(dd if=file1 bs=100M count=1 skip=1 2>/dev/null) \
    <(dd if=file2 bs=100M count=1 skip=1 2>/dev/null) \
&& echo "File fragments are identical"
  • Note: creating hashes to compare files also makes sense if you want to avoid reading two files at the same time. – Kamil Maciorowski Dec 6 at 12:45
  • @KamilMaciorowski Yes, true. But this method will still usually be faster than comparing hashes in the pairwise case. – Konrad Rudolph Dec 6 at 12:54
  • This is the go-to solution. cmp is 99.99% certain to be already installed if you have bash running, and it does the job. Indeed, cmp -n 131072 one.zip two.zip will do the job, too. Fewest characters to type, and fastest execution. Calculating a hash is nonsensical: it requires the entire 100 MB file to be read, plus a 100 MB portion of the complete file, which is pointless. If they're zip files and they're different, there will be a difference within the first few hundred bytes. Readahead delivers 128k by default anyway, so you might as well compare 128k (same cost as comparing 1 byte). – Damon Dec 6 at 13:47
  • The --bytes option is only complicating the task. Just run cmp without this option and it will show you the first byte which differs between the files. If all the bytes are the same, it will show EOF on the shorter file. This gives you more information than your example: how many bytes are correct. – pabouk Dec 6 at 14:13
  • If you have GNU cmp (and, I think, pretty much everybody does), you can use the --ignore-initial and --bytes arguments instead of complicating things with invocations of dd; see the sketch below. – Christopher Schultz Dec 7 at 14:27
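
A minimal sketch of that last suggestion, assuming GNU cmp (from diffutils); like the process-substitution version above, it compares the second 100 MiB of each file, but without dd:

cmp --ignore-initial=$((100 * 1024 * 1024)) \
    --bytes=$((100 * 1024 * 1024)) \
    file1 file2 \
&& echo "File fragments are identical"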

I'm sorry I can't test this right now, but this way will work:

dd if=yourfile.zip of=first100mb1.dat bs=100M count=1
dd if=yourotherfile.zip of=first100mb2.dat bs=100M count=1

This will get you the first 100 megabytes of both files.

Now get the hashes:

sha256sum first100mb1.dat && sha256sum first100mb2.dat 

You can also run it directly:

dd if=yourfile.zip bs=100M count=1 | sha256sum 
dd if=yourotherfile.zip bs=100M count=1 | sha256sum 
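
If you'd rather have the shell compare the two digests than eyeball them, here is a small sketch (assuming GNU coreutils dd, sha256sum, and cut; file names as above):

h1=$(dd if=yourfile.zip bs=100M count=1 2>/dev/null | sha256sum | cut -d' ' -f1)
h2=$(dd if=yourotherfile.zip bs=100M count=1 2>/dev/null | sha256sum | cut -d' ' -f1)
[ "$h1" = "$h2" ] && echo "First 100 MB match"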
  • Is there a way to pipe dd somehow into sha256sum without the intermediate file? – sinned Dec 6 at 10:10
  • I added another way according to your request. – davidbaumann Dec 6 at 10:15
  • Why create the hashes? That’s much less efficient than just comparing the file fragments directly (using cmp). – Konrad Rudolph Dec 6 at 12:34
  • In your middle code sample you say first100mb1.dat twice. Did you mean first100mb2.dat for the second one? – doppelgreener Dec 6 at 14:39
  • @KonradRudolph, "Why create the hashes?" Your solution (using cmp) is a winner without a doubt. But this way of solving the problem (using hashes) also has a right to exist, as long as it actually solves the problem (: – VL-80 2 days ago

You could just compare the files directly with a binary/hex diff program like vbindiff. It quickly compares files up to 4 GB on Linux and Windows.
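
Invocation is just the two file names (here named one and two, matching the display below):

vbindiff one two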

Looks something like this, only with the difference highlighted in red (1B vs 1C):

one
0000 0000: 30 5C 72 A7 1B 6D FB FC  08 00 00 00 00 00 00 00  0\r..m.. ........
0000 0010: 00 00 00 00                                       ....

two
0000 0000: 30 5C 72 A7 1C 6D FB FC  08 00 00 00 00 00 00 00  0\r..m.. ........
0000 0010: 00 00 00 00                                       ....

┌──────────────────────────────────────────────────────────────────────────────┐
│Arrow keys move  F find      RET next difference  ESC quit  T move top        │
│C ASCII/EBCDIC   E edit file   G goto position      Q quit  B move bottom     │
└──────────────────────────────────────────────────────────────────────────────┘
  • In my case, the files are zip archives, so no meaningful text in there. Comparing the hash value should be faster and less error prone. – sinned Dec 6 at 12:44
  • If you mean ASCII text, then that's irrelevant. vbindiff (and Konrad's cmp) compares binary data, byte for byte. In fact, hash values are much more likely to experience collisions. – Xen2050 Dec 6 at 13:12

Everybody seems to go the Unix/Linux route with this, but just comparing two files can easily be done with Windows standard commands:

FC /B file1 file2

FC is present in every Windows NT version ever made, and (if I recall correctly) it was also present in DOS. It is a bit slow, but that doesn't matter for a one-time use.
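
To limit the comparison to the first 100 MB, one option is to extract the fragments first (for example with dd, as in an earlier answer) and compare those:

FC /B first100mb1.dat first100mb2.dat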

I know the question says Bash, but the OP also states that they have Windows. For anyone who wants or requires a Windows solution, there's a program called HxD, a hex editor that can compare two files. If the files are different sizes, it will tell you whether the available parts are the same. And if need be, it can run checksums over whatever is currently selected. It's free and can be downloaded from the HxD website. I don't have any connection to the author(s); I've just been using it for years.

If you can access a shell session on the remote system, then you can break the source file up into pieces using the split command. To split a big file into (binary) pieces of one million bytes or less each:

split -b 1000000 bigfile.tgz

This will create pieces xaa, xab, etc. From there it is trivial to concatenate the pieces to reconstruct the file:

cat x?? > reconstructed_bigfile.tgz

Of course you have control over the names of the file components; I am just illustrating using the defaults.
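
You could then hash corresponding pieces on each side and compare; a minimal sketch (partial.zip is a hypothetical name for the local partial download; head -c is GNU coreutils):

sha256sum xaa                              # on the remote system: first 1 MB piece
head -c 1000000 partial.zip | sha256sum    # locally: first 1 MB of the partial file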

  • No, the zip download is from an unreliable ticket system, to which I do not have shell access. – sinned Dec 7 at 13:37
  • -1. The question is "how to compare parts of files?" So how to compare? For now your "answer" is just a comment on how to get parts of files. – Kamil Maciorowski Dec 7 at 13:41
