Some love for the API
#226 by silence
2020-09-08 at 03:10
Recently I got notifications about new posts in some topics. How can I unsubscribe?
#227 by yorhel
2020-09-08 at 05:42

> For example: https://en.wikipedia.org/wiki/Clannad_(visual_novel) vs https://www.wikidata.org/wiki/Q110607. I'm concerned that this deprecation will give less relevant data.

That Wikidata entry has links to the relevant Wikipedia page in 26 languages; you could fetch those (they have an API) and get more information than a single Wikipedia page would ever give you.

> Also, would you be willing to add the Store Page for VnInfo to the API?

Store pages are technically release info, but I'll look into it.

> Recently I got notifications about new posts in some topics. How can I unsubscribe?

There's a checkbox on your notifications page to unsubscribe.
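As a sketch of the approach yorhel describes: Wikidata's `wbgetentities` API can return an entry's sitelinks, i.e. the Wikipedia article in every language that has one. The entity ID below is the Clannad entry mentioned above; the client code itself is illustrative, not an official snippet.

```python
import json
import urllib.parse
import urllib.request

API = "https://www.wikidata.org/w/api.php"

def sitelinks_url(entity_id: str) -> str:
    """Build a wbgetentities request URL asking only for sitelinks."""
    params = {
        "action": "wbgetentities",
        "ids": entity_id,
        "props": "sitelinks",
        "format": "json",
    }
    return API + "?" + urllib.parse.urlencode(params)

def wikipedia_links(entity_id: str) -> dict:
    """Map site code (e.g. 'enwiki') to article title for one entity."""
    with urllib.request.urlopen(sitelinks_url(entity_id)) as resp:
        data = json.load(resp)
    links = data["entities"][entity_id]["sitelinks"]
    return {site: info["title"] for site, info in links.items()}

if __name__ == "__main__":
    # Q110607 is the Wikidata entry for the Clannad visual novel.
    for site, title in sorted(wikipedia_links("Q110607").items()):
        print(site, title)
```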
#228 by silence
2020-09-08 at 07:06
Thanks.
#229 by micah686
2020-09-09 at 00:19
Yorhel, I see what you mean with the Wikidata pages. However, to avoid having to use another API or scrape Wikidata for the Wikipedia link, could you reconsider deprecating the Wikipedia field?
That way, both the Wikipedia and Wikidata fields would be available to the users/devs of the API. Dropping it seems like a bit of a downgrade in functionality, versus something like the NSFW/image flagging changes.
#230 by soulusions
2020-09-26 at 11:19
Hey,
I've been working with the API quite a bit recently, but I came upon an issue: many things such as VAs, traits and tags are sent as IDs. I was wondering how I'd go about converting those IDs to strings, for example.
EDIT: I'm looking into using the DB dumps right now, but that'd prove quite inefficient (though I guess being able to receive a 1 MB JSON file through a RESTful API might not be possible with VNDB).
Last modified on 2020-09-26 at 11:26
#231 by yorhel
2020-09-26 at 11:27
VAs you can get with "get staff". For tags and traits you indeed need the tags & traits dump files, but they're relatively small and don't change often enough to warrant an API call.
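For reference, a minimal sketch of turning the tags dump into an ID-to-name lookup — assuming the dump is a gzipped JSON array whose entries carry "id" and "name" fields, per the dump docs. The miniature sample at the bottom is hypothetical, just to show the shape of the result.

```python
import gzip
import json

def load_tag_names(path: str) -> dict:
    """Parse vndb-tags-latest.json.gz into an id -> name lookup."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return {tag["id"]: tag["name"] for tag in json.load(f)}

# Hypothetical miniature dump, showing the shape of the lookup:
sample = [{"id": 32, "name": "ADV"}, {"id": 43, "name": "Protagonist"}]
names = {tag["id"]: tag["name"] for tag in sample}
print(names[32])  # -> ADV
```

Since the dumps rarely change, building this dict once at startup (or caching it on disk) avoids any per-request cost.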
#232 by soulusions
2020-09-26 at 13:51
Aye, thanks a bunch yorhel, I got the whole dumps running fine; just need to find an efficient way to store them in memory now...
#233 by micah686
2020-11-13 at 07:19
Yorhel, I was looking into the Character API section, and I noticed that there was no entry for age. I checked the API docs, and there is no way to get it from another field, like birthday, which only exposes the day and month. Could you add character age to the API?
#234 by yorhel
2020-11-13 at 08:05
Yeah, the API is lagging behind on the recent DB changes. Added age, cup size and spoiler gender to the "get character" command.
#235 by foiegras
2020-11-13 at 12:41
#234
In d11#5.4 spoiler gender uses "gender_spoiler", but I received it with the key "spoil_gender".
#236 by yorhel
2020-11-13 at 13:13
Oops, fixed the docs.
#237 by micah686
2020-11-18 at 05:09
I have a question related to tag/trait parents. I see from the VNDB trait dump docs that each trait can have an array of parent IDs. So, would the top/root parent trait always be the same for any of the parent IDs? For example, let's take the Suit trait. It has 3 parent traits, and all of them have the Clothes trait as their topmost parent. Will that always be the case, or could there be 2 different topmost parent traits?
Also, does this same logic apply to tags as well?
#238 by yorhel
2020-11-18 at 05:14
There can be multiple roots for each tag & trait. Which, especially in the case of traits, kind of sucks. Right now it's somewhat random which root trait is being used as the "group".
#239 by micah686
2020-11-19 at 04:04
So, given 2 child trait IDs, how do I determine the top parent?
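One way to answer this from the dump data: walk the "parents" arrays upward and collect every root (a trait with no parents), since there can be more than one. A sketch against a hypothetical miniature hierarchy, not VNDB's real IDs:

```python
def root_ancestors(trait_id, parents_of):
    """Return the set of topmost ancestors of a trait.

    parents_of maps trait id -> list of parent ids (empty for roots),
    matching the "parents" arrays in the traits dump.
    """
    roots, stack, seen = set(), [trait_id], set()
    while stack:
        tid = stack.pop()
        if tid in seen:
            continue  # the hierarchy is a DAG; skip already-visited nodes
        seen.add(tid)
        parents = parents_of.get(tid, [])
        if not parents:
            roots.add(tid)  # no parents: this is a root
        else:
            stack.extend(parents)
    return roots

# Hypothetical hierarchy: 1 and 2 are roots, 3 descends from
# both, 4 descends from 3 only.
parents_of = {1: [], 2: [], 3: [1, 2], 4: [3]}
print(root_ancestors(4, parents_of))  # -> {1, 2}
```

Given two child IDs, intersecting their two root sets shows whether they share a topmost parent.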
#240 by proteuskun
2020-12-08 at 09:25
Can someone tell me how to use this API in Python? Do I simply connect, listen and receive? After that, do I just log in using JSON in the same .py file? Sorry, I have never used an API with ports before, only with keys. Is there proper documentation with actual code? :( Can someone help, I have no idea what I am doing.
Last modified on 2020-12-08 at 09:26
#241 by yorhel
2020-12-08 at 14:57
You can either look for a Python sockets programming tutorial or use an existing VNDB lib if you want it easy (like this one).
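For the bare-sockets route, here is a minimal sketch of the TCP protocol as the API docs describe it: messages are UTF-8 text terminated by a 0x04 byte, and an anonymous login is enough for read-only queries. The host, port and login arguments below reflect the docs at the time and may have changed; the client name/version are placeholders.

```python
import json
import socket

HOST, PORT = "api.vndb.org", 19534  # plain-TCP port from the API docs
EOT = b"\x04"  # every message ends with an end-of-transmission byte

def encode(command: str, args=None) -> bytes:
    """Frame a command (optionally with a JSON argument) for the wire."""
    msg = command if args is None else command + " " + json.dumps(args)
    return msg.encode("utf-8") + EOT

def recv_message(sock) -> str:
    """Read bytes until the 0x04 terminator and return the decoded text."""
    buf = b""
    while not buf.endswith(EOT):
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf += chunk
    return buf[:-1].decode("utf-8")

if __name__ == "__main__":
    with socket.create_connection((HOST, PORT)) as s:
        # Anonymous login, no credentials needed for "get" commands.
        s.sendall(encode("login",
                         {"protocol": 1, "client": "demo", "clientver": "0.1"}))
        print(recv_message(s))  # the docs say a successful login answers "ok"
        s.sendall(encode("get vn basic (id = 17)"))
        print(recv_message(s))
```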
#242 by proteuskun
2020-12-11 at 08:48
Thank you, I will look into it.
#243 by tracreed
2021-02-08 at 16:56
Not sure where to mention this, but in the API docs, section 2 (Filter string syntax), I'm pretty sure there is a typo: "Some" has become "ome".
#244 by tracreed
2021-02-08 at 18:49
Alright, so I am a bit confused and unsure where to ask about this. I downloaded one of the dumps, specifically the tags dump, assuming it would extract to a .json file. And indeed it did extract to a .json file, but it didn't open in anything I used. So I did some messing around with the files, and it's another .gz file inside of the first one, which when extracted contains the actual .json file, but with no extension. Is this the way it is supposed to work, or am I misunderstanding how .gz files work? I ran gzip -d <file> to extract the original file I downloaded.
edit: I obviously don't understand .gz files very well. But after talking to a friend, appending another .gz extension to the downloaded file does make it act as it should. But what is the point of double compressing it?
Last modified on 2021-02-08 at 19:24
#245 by yorhel
2021-02-08 at 19:43
Huh, it doesn't seem double compressed to me.
This outputs JSON:
curl -Lq https://dl.vndb.org/dump/vndb-tags-latest.json.gz | zcat
This works too:
How are you downloading the file? Is gzip perhaps also used as HTTP content encoding? (the server shouldn't serve that file with gzip, but who knows)
EDIT: Ah, it does seem double-compressed when downloaded through Firefox. Weird.
Last modified on 2021-02-08 at 19:47
#246 by tracreed
2021-02-08 at 19:47
I am just clicking the link and saving it as a file, using Firefox.
#247 by tracreed
2021-02-08 at 19:53
When I run the first command you used, it works fine. But when I run this command:
wget -O - https://dl.vndb.org/dump/vndb-tags-latest.json.gz | gzip -c > test.json | zcat
I once again get compressed data. Maybe it's a problem with gzip?
Edit: Nevermind, this command is wrong.
What does happen is: when I run
wget -O - https://dl.vndb.org/dump/vndb-tags-latest.json.gz | gzip -c > test.json
and then inspect the test.json file, it is now the compressed file.
Last modified on 2021-02-08 at 19:57
#248 by yorhel
2021-02-08 at 20:00
Piping to 'gzip -c' will compress the data, yes, that's expected. You'll need the -d option to decompress, or call 'gunzip'.
I've fixed the downloading through Firefox. Turns out the server did indeed use gzip content-encoding when requested. Not sure why Firefox doesn't decompress it before saving, as you'd expect with HTTP content encoding.
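Since every gzip stream starts with the magic bytes 0x1f 0x8b, the double compression described above is easy to detect and peel off programmatically; a small sketch:

```python
import gzip

MAGIC = b"\x1f\x8b"  # first two bytes of any gzip stream

def gzip_layers(data: bytes) -> int:
    """Count how many times a blob has been gzip-compressed."""
    layers = 0
    while data[:2] == MAGIC:
        data = gzip.decompress(data)
        layers += 1
    return layers

payload = b'{"hello": "world"}'
once = gzip.compress(payload)
twice = gzip.compress(once)  # what the Firefox download produced
print(gzip_layers(once), gzip_layers(twice))  # -> 1 2
```

The same check explains the observed behaviour: `gzip -d` peels one layer, leaving an extension-less file that is still gzip data, which is why renaming it to .gz made it "act as it should".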