Start with Postgres and scale later once you have a better idea of your access patterns. You will likely model your graph as entity and edge tables and walk it recursively (most likely from your application, or with recursive CTEs).
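To make the "model as entities, walk recursively" idea concrete, here's a minimal sketch of the recursive-CTE approach. It uses an in-memory SQLite database and a hypothetical `edges(src, dst)` table for illustration; Postgres's `WITH RECURSIVE` syntax is essentially identical.

```python
import sqlite3

# In-memory DB standing in for Postgres; WITH RECURSIVE works the same way there.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE edges (src TEXT, dst TEXT);
    INSERT INTO edges VALUES ('a','b'), ('b','c'), ('c','d'), ('b','d');
""")

# Recursive CTE: all nodes reachable from 'a'.
# UNION (not UNION ALL) deduplicates, which also terminates on cyclic graphs.
rows = conn.execute("""
    WITH RECURSIVE reachable(node) AS (
        SELECT 'a'
        UNION
        SELECT e.dst FROM edges e JOIN reachable r ON e.src = r.node
    )
    SELECT node FROM reachable ORDER BY node;
""").fetchall()

print([n for (n,) in rows])  # ['a', 'b', 'c', 'd']
```

Walking the graph from application code instead is just the same join done in a loop (fetch neighbors, track visited nodes), which can be easier to debug but costs a round-trip per hop.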
If the goal is to maintain views over graphs and performance/scale matters, consider Feldera. We see folks use it for its ability to incrementally maintain recursive SQL views (disclaimer: I work there).
I currently run Firefox nightly with cross-site cookies disabled and all the trackers/scripts blocked. I also run uBlock Origin. Any idea if privacy badger is redundant with this set up?
Yeah that's been our biggest issue in this ecosystem (the non-JVM clients). They can't do writes and are often far behind on feature parity with the blessed JVM clients.
Not the one you asked, but here's what I would have told my 17 year old self.
* "Slope beats y-intercept." The best computer scientists and engineers I've ever worked with and/or mentored embodied this principle more than anything else.
* It can be tempting to over-optimize for short-term milestones (e.g., an important admissions exam, the next job or promotion), but there is a significant compounding value to knowledge accumulation and truly learning your craft well. Read and learn as much as you can, all the time, even if it isn't immediately necessary or useful.
USENIX and their conferences were the absolute best to publish with. As a researcher you just focus on submitting papers and/or serving on the PC; USENIX organizes the whole conference itself instead of depending on an army of volunteers (you won't see "general chairs" and "local chairs" like you do with ACM). And all papers were open access without even needing a login: you literally just click the PDF from the conference website.
Many USENIX papers are not open access, despite being available by literally just clicking the PDF from the conference website. (See the definition in https://openaccess.mpg.de/Berlin-Declaration.) This is not for any nefarious reason; a lot of them predate the general understanding of why open-access licensing was important, as well as Creative Commons's founding.
You'll note, for example, that https://www.usenix.org/legacy/publications/library/proceedin... bears no license of any kind, and the unfortunate fact is that under current copyright law, random people redistributing copies of the paper is by default illegal.
Perhaps the first time you'd heard of it directly, but you used the term "open access" as if everyone were familiar with it, so you'd apparently been hearing about it indirectly for many years.
The impact of that declaration is that people today are talking about "open access" and moving to open-access publishing models, and new ACM articles are being published as CC-BY or CC-BY-NC-ND.
Most people don't have any idea what we're talking about; if you asked most people what Kosaraju's algorithm was, how an absorption refrigerator worked, or who the Four Hundred were in the Gilded Age, they also wouldn't have anything sensible to say. In https://link.springer.com/article/10.3758/s13428-012-0307-9 the authors found that less than half of college students in the USA knew the name of the capital of Iraq, which was full of US troops at the time. And in https://youtu.be/ZjGd1F1Xk8w?t=18 you can see lots of random people who don't know what "WWW" stands for and think Asia is a country.
Talking about "open access" without knowing about the Berlin Declaration at the heart of that movement is the same kind of ignorance.
I am currently the publication chair of an ACM SIGCHI conference, and actually all the work is managed by Sheridan publishing for ACM. The process is really streamlined. The main paper track has actually been a journal for a few years now, so it is mostly a matter of getting the flea circus of 30 workshops and other adjunct papers to meet their deadlines. We are still under the old system, so I wonder what the effect of the new one will be, as some universities prepay the fees while others require the authors to pay per paper, afaik.
I first went to sqlx thinking it would be like jOOQ for Rust, but that wasn't the case. It's a pretty low-level library and doesn't really abstract away the underlying DBs much, not to mention issues with type conversions. We've since just used rust-postgres.
For anyone else, if you want to try out Feldera and IVM for feature-engineering (it gives you perfect offline-online parity), you can start here: https://docs.feldera.com/use_cases/fraud_detection/