Just an idea: Have you ever thought about running the archive through Gzip? That would reduce loading times significantly and would reduce the effect of changelogs as mentioned earlier.

I started doing so recently with my own JSON-based DB, since customers put it on network drives and started complaining about loading times. Zipping the whole thing did shut them up.
neumi5694: Just an idea: Have you ever thought about running the archive through Gzip? That would reduce loading times significantly and would reduce the effect of changelogs as mentioned earlier.

I started doing so recently with my own JSON-based DB, since customers put it on network drives and started complaining about loading times. Zipping the whole thing did shut them up.
I dunno if it'll actually work in this scenario; Python's zip support isn't exactly mature (e.g. no multi-archive support), so the decompression might be more overhead than the improved drive-to-memory transfer. I can do some benchmarking on it, I suppose.
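A rough way to run that benchmark, assuming a plain manifest file and a gzip-compressed copy of it sit side by side (both file names here are hypothetical):

```python
import gzip
import time

PLAIN = "gog-manifest.dat"      # hypothetical: an uncompressed manifest
PACKED = "gog-manifest.dat.gz"  # hypothetical: the same data, gzip-compressed

def timed_read(opener, path):
    """Read the whole file and return (seconds elapsed, bytes read)."""
    start = time.perf_counter()
    with opener(path, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

plain_s, plain_n = timed_read(open, PLAIN)
gzip_s, gzip_n = timed_read(gzip.open, PACKED)  # decompresses as it reads
print(f"plain: {plain_n:>12} bytes in {plain_s:.3f}s")
print(f"gzip : {gzip_n:>12} bytes in {gzip_s:.3f}s")
```

On a network drive the plain read is dominated by transfer time, so the comparison mostly measures whether decompression costs more than the bytes it saves.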
Kalanyr: I dunno if it'll actually work in this scenario; Python's zip support isn't exactly mature (e.g. no multi-archive support), so the decompression might be more overhead than the improved drive-to-memory transfer. I can do some benchmarking on it, I suppose.
That's what I meant by gzip: it's just a compressed stream, without the file structure that normal zip archives have. It's a single compressed file.

Instead of writing directly to the disk, you send it through a gzip stream and the file ends up compressed. When reading, you don't read it directly but as a gzip stream (if that fails, it's one of the old archives, which you then read as a normal text file). At least that's how I did it.

If you don't need random access to the file, there should be no difference in how it's handled. If random access to single lines is necessary, it would be better to cache the file anyway for deserializing.
Post edited January 30, 2024 by neumi5694
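In Python this maps directly onto the standard gzip module. A minimal sketch of the approach described above, assuming a JSON manifest for illustration (gogrepoc's actual manifest format may differ, and the file name is hypothetical):

```python
import gzip
import json

MANIFEST = "gog-manifest.dat"  # hypothetical file name

def save_manifest(data):
    # Write through a gzip stream instead of directly to disk;
    # the file on disk ends up compressed.
    with gzip.open(MANIFEST, "wt", encoding="utf-8") as f:
        json.dump(data, f)

def load_manifest():
    # Try reading as a gzip stream first; if that fails, it's one of
    # the old uncompressed manifests, so fall back to plain text.
    try:
        with gzip.open(MANIFEST, "rt", encoding="utf-8") as f:
            return json.load(f)
    except gzip.BadGzipFile:  # Python 3.8+; older versions raise OSError
        with open(MANIFEST, "rt", encoding="utf-8") as f:
            return json.load(f)
```

Since the manifest is read and written as a whole rather than accessed randomly, wrapping the stream like this changes nothing else about how it's handled.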
Kalanyr: I dunno if it'll actually work in this scenario; Python's zip support isn't exactly mature (e.g. no multi-archive support), so the decompression might be more overhead than the improved drive-to-memory transfer. I can do some benchmarking on it, I suppose.
neumi5694: That's what I meant by gzip: it's just a compressed stream, without the file structure that normal zip archives have. It's a single compressed file.

Instead of writing directly to the disk, you send it through a gzip stream and the file ends up compressed. When reading, you don't read it directly but as a gzip stream (if that fails, it's one of the old archives, which you then read as a normal text file). At least that's how I did it.

If you don't need random access to the file, there should be no difference in how it's handled. If random access to single lines is necessary, it would be better to cache the file anyway for deserializing.
Nothing is written randomly at the moment, so I'll give that a shot when I get a chance.
neumi5694: That's what I meant by gzip: it's just a compressed stream, without the file structure that normal zip archives have. It's a single compressed file.

Instead of writing directly to the disk, you send it through a gzip stream and the file ends up compressed. When reading, you don't read it directly but as a gzip stream (if that fails, it's one of the old archives, which you then read as a normal text file). At least that's how I did it.

If you don't need random access to the file, there should be no difference in how it's handled. If random access to single lines is necessary, it would be better to cache the file anyway for deserializing.
Kalanyr: Nothing is written randomly at the moment, so I'll give that a shot when I get a chance.
If implemented, I recommend making the gzip solution optional, as I believe it might be an unnecessary complication for people who want to edit the manifest manually.
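One way to get the best of both would be to sniff the two-byte gzip magic number and pick the reader accordingly, so hand-edited plain-text manifests keep working without a flag. A sketch, with a hypothetical file name and helper:

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"  # the first two bytes of every gzip file

def open_manifest(path):
    """Return a text-mode handle, transparently handling both formats."""
    with open(path, "rb") as f:
        magic = f.read(2)
    if magic == GZIP_MAGIC:
        return gzip.open(path, "rt", encoding="utf-8")
    return open(path, "rt", encoding="utf-8")
```

Anyone who wants to edit the manifest by hand could then just decompress it, edit it, and leave it uncompressed; the tool would still read it.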
Something interesting just happened.

Due to the ongoing issues, the library cannot be filtered or searched. The movie section link doesn't work either, yet this has somehow caused GOGREPOC to suddenly discover all my movies and add them to the manifest. I wonder if it'll be able to download them lol.

Nice to see GOG finally add the long-awaited movie support to GOGREPOC xD.
SargonAelther: The movie section link doesn't work either
I'm not sure which link you are referring to, but the two I am familiar with work for me:
https://www.gog.com/en/account/movies
https://www.gog.com/en/movies
I'm referring to the library. See attachment.

As for GOGREPOC, it had entries such as:
( 2 / 63) fetching game details for angry_video_game_nerd_the_movie...

And it is indeed downloading the movies now, not that I'm complaining. It's cool to see a GOG glitch accidentally cause a new feature in GOGREPOC.
Attachments:
Post edited February 10, 2024 by SargonAelther
SargonAelther: I'm referring to the library. See attachment.
That's the first link posted by mrkgnao and it works fine for me as well.
Maybe they have multiple data centres for various regions and some are more broken than others.
Post edited February 10, 2024 by SargonAelther
Considering the (still) ongoing issues, is it a bad idea to run gogrepo? Have you noticed oddities like missing games, etc. (i.e. gogrepo marking some games as orphaned, I guess)?

Someone wondered whether GOG is reindexing their whole game database at the moment or something; I'm not sure how that would affect us. At least the alphabetical ordering doesn't, or didn't, work, but maybe gogrepo doesn't care about that. EDIT: Well, now it also seems to work; it didn't yesterday...

At least with gogrepo you can track which games have vanished, as long as you have a fairly fresh update of your local repository... The "number of games" on my account page seems believable too, but I can't recall exactly what it was before. Maybe just keep the earlier manifest file to count the number of games at that time, and then check what I've bought after that date.
Post edited February 10, 2024 by timppu
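That snapshot-diff idea sketches out easily, assuming the manifest can be parsed into entries with a title field (a hypothetical format for illustration; gogrepoc's real manifest would need its own loader, and the file names are made up):

```python
import json

def titles(path):
    # Hypothetical: assumes the manifest parses as a JSON list of
    # entries that each carry a "title" key.
    with open(path, "rt", encoding="utf-8") as f:
        return {entry["title"] for entry in json.load(f)}

old = titles("gog-manifest.old")  # snapshot kept from before the issues
new = titles("gog-manifest.dat")  # freshly updated manifest
print("vanished:", sorted(old - new))
print("added   :", sorted(new - old))
```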
timppu: Considering the (still) ongoing issues, is it a bad idea to run gogrepo?
IMO it seems like a bad idea, both for the manifest and for GOG.
SargonAelther: Something interesting just happened.

Due to the ongoing issues, the library cannot be filtered or searched. The movie section link doesn't work either, yet this has somehow caused GOGREPOC to suddenly discover all my movies and add them to the manifest. I wonder if it'll be able to download them lol.

Nice to see GOG finally add the long-awaited movie support to GOGREPOC xD.
After some testing and asking around, I've come to the conclusion that the reason GOGREPOC discovered movies for me is that I have purchased too many games. The fact that I crossed the magic number during the maintenance is just a coincidence.

Apparently, once you pass ~4060 library entries (games and movies included), most likely at exactly 4096, the library breaks somewhat: search stops working, sorting stops working, and the game and movie libraries get merged (see screenshot). I asked another user with a massive library and they confirmed the issue. So, due to this merger, GOGREPOC can now see my movies as well lol.
Attachments:
Post edited February 13, 2024 by SargonAelther
Just to be sure, the currently recommended fork is the one from Kalanyr, isn't it?
https://github.com/Kalanyr/gogrepoc
This would be the most up-to-date, bug-free, full-featured version of gogrepo, right?
park_84: Just to be sure, the currently recommended fork is the one from Kalanyr, isn't it?
https://github.com/Kalanyr/gogrepoc
This would be the most up-to-date, bug-free, full-featured version of gogrepo, right?
Right.