So I started the Google Takeout process because I want to move my photos from Google Photos to Immich.

Google emailed me saying the archives are ready… uh… I have to download 81 zip files, each 2 GB… 😬

Is there an easy way to download all of these files? Or do I have to click “download” 81 times and hope the downloads don’t get interrupted?

  • skoberlink@lemmy.world · 18 hours ago

    I have tried to solve this many times, since I want to regularly back up my Google content (mostly the images, for the same purpose you mention).

    Unfortunately, I've never come up with or found a good solution. I even looked into scripting it with something like Puppeteer, but Takeout requires you to re-confirm your authentication regularly, and with no API access I haven't found a good way around that. It also won't let you use CLI tools like wget. You could probably pull a session token or cookie out of the browser and hand it to a CLI tool, but you'd have to redo that so often that it's more of a pain than just downloading manually.
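    If you want to try the cookie route anyway, here's a minimal sketch of what I mean, in Python with the requests library. It assumes you've exported your logged-in Google session to a Netscape-format cookies.txt (e.g. via a browser extension) and copied one archive's download URL out of the browser's download manager; the URL and filename below are placeholders, not real Takeout endpoints.

    ```python
    # Sketch of the "borrow the browser's session cookies" approach.
    # cookies.txt and the download URL are placeholders you'd supply yourself.
    import http.cookiejar

    import requests

    # Load cookies exported from a logged-in browser session (Netscape format).
    jar = http.cookiejar.MozillaCookieJar("cookies.txt")
    jar.load(ignore_discard=True, ignore_expires=True)

    session = requests.Session()
    session.cookies = jar  # reuse the browser's authenticated session

    # Placeholder: paste the real URL for one archive here.
    url = "https://takeout.google.com/.../download"

    # Stream the download to disk in 1 MB chunks to avoid holding 2 GB in memory.
    with session.get(url, stream=True) as resp:
        resp.raise_for_status()
        with open("takeout-001.zip", "wb") as out:
            for chunk in resp.iter_content(chunk_size=1024 * 1024):
                out.write(chunk)
    ```

    The catch is exactly what I described above: the session cookies expire quickly, so you end up re-exporting them constantly, which is why I gave up on this route.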

    My current solution is to run a Firefox browser in a container on my server and download them there. It acts as a sort of session manager (like tmux or zellij for the command line): if the PC I'm working from goes to sleep or something, the downloads keep going on the server. Then I just check in occasionally throughout the day. Plus, I wanted the files on the server anyway, so downloading them there directly saves me the transfer step afterwards.

    Switching the archive format to .tgz lets you create files of up to 50 GB, which at least means fewer iterations and more time between interactions (so I can actually do something useful in the meantime). At 81 × 2 GB ≈ 162 GB, that works out to about four archives instead of 81.

    I sincerely hope someone proves me wrong and has a way to do this, but I've searched a lot. I know other people want to solve it too, and I've never seen anyone with a solution.