Admittedly, I try to stay away from database design whenever possible at work. (Everything database-related is legacy for us.) But the way the term is being used here kinda makes me wonder: do modern SQL databases have enough security features and permissions management in place that you could just directly expose your database to the world with a "guest" user that can only make incredibly specific queries?
Cut out the middle man, directly serve the query response to the package manager client.
(I do immediately see issues stemming from the fact that you can't leverage features like edge caching this way, but I'm not really asking if it's a good solution, I'm more asking if it's possible at all.)
There are still no realistic ways to expose a hosted SQL solution to the public without really unhappy things occurring. It doesn't matter which vendor you pick.
Anything where you are opening a TCP connection to a hosted SQL server is a non-starter. You could hypothetically have so many read replicas that no one could blow anyone else up, but this would get to be very expensive at scale.
Something involving SQLite is probably the most viable option.
I personally think that this is the future, especially since such an architecture allows for E2E encryption of the entire database.
The protocol should just be a transaction layer for coordinating changes of opaque blobs.
All of the complexity lives on the client.
That makes a lot of sense for a package manager because it's something lots of people want to run, but no one really wants to host.
There's no need to have a publicly accessible database server; just put all the data in a single SQLite database and distribute that to clients. It's possible to do streaming updates by just zipping up a text file containing all the SQL commands and letting clients download that. A more sophisticated option is something like Litestream.
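To make that concrete, here's a minimal client-side sketch, assuming a hypothetical mirror that publishes a full packages.db snapshot plus gzipped batches of incremental SQL statements (the URLs and the packages table schema are made up for illustration):

    import gzip
    import sqlite3
    import urllib.request

    # Hypothetical mirror layout (illustrative only):
    #   https://mirror.example.org/index/packages.db        - full SQLite snapshot
    #   https://mirror.example.org/index/updates/<N>.sql.gz - incremental SQL batches
    MIRROR = "https://mirror.example.org/index"
    DB_PATH = "packages.db"

    def bootstrap():
        """Download the full snapshot once."""
        urllib.request.urlretrieve(f"{MIRROR}/packages.db", DB_PATH)

    def apply_update(batch_no):
        """Fetch one gzipped batch of SQL statements and replay it locally."""
        with urllib.request.urlopen(f"{MIRROR}/updates/{batch_no}.sql.gz") as resp:
            sql = gzip.decompress(resp.read()).decode("utf-8")
        con = sqlite3.connect(DB_PATH)
        try:
            con.executescript(sql)  # the whole batch applies as one script
            con.commit()
        finally:
            con.close()

    def lookup(name):
        """The client's 'API' is just local SQL against the local file."""
        con = sqlite3.connect(DB_PATH)
        rows = con.execute(
            "SELECT name, version FROM packages WHERE name = ?", (name,)
        ).fetchall()
        con.close()
        return rows

All the query flexibility lives on the client, the server only ever serves static files (which is exactly what edge caches are good at), and Litestream-style WAL replication is just a more granular version of the same idea.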
I will stick with Firefox due to multi account containers. Chrome does not offer a comparable alternative, and this extension makes working with AWS significantly easier.
To me, Blink as a rendering engine is too closely coupled to Google. Even though Chromium is technically separate and open source, the amount of leverage Google has is too high.
I dread the possibility that Gecko and WebKit browsers truly die out and the single biggest name in web advertising gains unilateral sway over the direction of web standards.
A good example of this: through Google's exclusive leverage, all Blink-based browsers are phasing out support for Manifest V2, a widely unpopular, forced change. If I'm using a Blink-based browser, I become vulnerable to any other profit-motivated changes like that one.
Mozilla might be trying their hardest to do the same with this AI schlock, but if I have to choose between the trillion-dollar-market-cap dictator of the internet and the little kid playing pretend evil billionaire in their sandbox? Well, Mozilla is definitely the less threatening of the two in that regard.
But notably not a lack of competing browser engines, as the power behind these decisions comes from the product, not the open-source libraries the product uses.
It's not based on cryptocurrency; there are just extra features that use it. Unstoppable Domains is an optional feature. You don't need to visit them, but it gives value to people by letting them actually own their domain instead of leasing it from ICANN. Viewing ads to earn BAT is an optional feature. As I mentioned, ad blocking is built in, so you can have it show no ads if you want.
Not OP, but personally I very much prefer Firefox's font rendering on Windows. Text in Chromium-based browsers looks blurry to me, which causes eye strain. Firefox also has a much sharper and better-looking image downscaling algorithm; images again look blurry in Chromium-based browsers.
Have you used Chrome? The depth of enshittification is staggering. Setting it up from scratch is like watching a Cory Doctorow documentary.
The only change that'd get me to willingly use the engine would be the DOJ mandating the return of Manifest V2 support and then barring Google from contributing to it for the next 40 years.
This feels like a very indirect way of saying "yes, the Fourier transform of a signal is a breakdown of its component frequencies, but depending on the kind of signal you are trying to characterize, it might not be what you actually need."
It's not that unintuitive to imagine that if all of your signals are pulses, something like the wavelet transform might do a better job of giving you meaningful insight into a signal than the Fourier transform would.
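A quick way to see why, as a rough numpy sketch (the numbers are just illustrative): a narrow pulse smears its energy across a big chunk of the Fourier spectrum, while a pure tone concentrates in a single bin, so "component frequencies" tell you very little about pulse-like signals.

    import numpy as np

    n = 1024
    t = np.arange(n)

    pulse = np.zeros(n)
    pulse[500:508] = 1.0                      # a single short pulse
    tone = np.sin(2 * np.pi * 50 * t / n)     # a pure tone for comparison

    for name, x in [("pulse", pulse), ("tone", tone)]:
        spectrum = np.abs(np.fft.rfft(x))
        # How many frequency bins (taken largest-first) hold 95% of the energy?
        energy = np.sort(spectrum ** 2)[::-1]
        k = int(np.searchsorted(np.cumsum(energy), 0.95 * energy.sum())) + 1
        print(f"{name}: {k} of {len(spectrum)} bins hold 95% of the energy")

    # The tone needs about one bin; the pulse spreads over a large fraction of
    # the spectrum, and the spectrum alone says nothing about *where* the pulse
    # happened. Wavelets / STFT keep that time localization.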
The thinking that sines are the basic building blocks and "own" frequencies is part of the problem. Fourier is a breakdown into frequencies of sine waves. Sines are fundamental in the physics of certain idealized conditions, but using sines is just a choice; mathematically you could just as well use other bases. A triangle wave has mathematically the same right to "own" a frequency as a sine.
Reality is often different from the ideal and not that linear, so basic waveforms often aren't really sines. But people usually only know sines, so they'll use that hammer on every nail. Some people in electrical engineering may know about square waves, but there isn't yet enough deeper understanding out there to play with the mathematical tools correctly.
Physics didn't pick sinusoids because it "only knows about sines." Physics picked them because the math forces them on you.
Actual engineers:
- use sinusoids because they're eigenfunctions of LTI systems: the output is the same sinusoid, just scaled and phase-shifted (see the sketch after this list)
- use square waves for digital logic
- use triangle waves for modulation
- use wavelets for compression or time/frequency localization
- use Hilbert transforms (and actually know what "orthonormal" means)
- use STFT, CWT, FFTs... and know exactly why Fourier works and when it breaks
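For the first bullet, a tiny sketch (illustrative numpy, nobody's production code) of what makes sinusoids special for LTI systems: push one through an arbitrary FIR filter and you get the same frequency back, only scaled and phase-shifted; a square wave at the same fundamental does not keep its shape, because each harmonic picks up a different gain and phase.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 4096, 32                      # signal length and frequency bin
    h = rng.normal(size=16)              # arbitrary FIR impulse response (an LTI system)
    H = np.fft.rfft(h, n)                # its frequency response, zero-padded to n

    t = np.arange(n)
    x = np.sin(2 * np.pi * k * t / n)    # pure sinusoid at bin k

    # Apply the system via circular convolution so the example stays exact
    y = np.fft.irfft(np.fft.rfft(x) * H, n)

    # Eigenfunction property: output is the *same* sinusoid, just scaled and shifted
    A, phi = np.abs(H[k]), np.angle(H[k])
    print(np.allclose(y, A * np.sin(2 * np.pi * k * t / n + phi)))   # True

    # A square wave does not come out as a scaled/shifted square wave:
    # each of its harmonics gets a different gain and phase, so the shape changes.
    sq = np.sign(x)
    y_sq = np.fft.irfft(np.fft.rfft(sq) * H, n)
    print(np.allclose(y_sq, A * np.sign(np.sin(2 * np.pi * k * t / n + phi))))  # False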
Leaves me wondering if this will allow for superconducting cryogenic transistors? If my hobby-level understanding of how silicon doping works carries over, this new superconducting germanium would be a p-type? I could imagine something like ion implantation being able to establish n-type regions within the germanium while allowing bulk regions of the lattice to maintain superconducting properties.
Though admittedly, I'm not actually aware of which parts of a semiconductor circuit are the biggest power dissipation sources, so I guess it's entirely possible that most of the power is dissipated across the p-n junctions themselves.
Yes, this would be P type. Boron is usually the P type dopant of choice. I'm not sure what role they have in mind for this, but probably to replace polysilicon and metals as conductors. What you have to watch out for is that this will make diodes wherever it bumps up against n-type material. This is a problem for metals as well, because you can get accidental Schottky junctions, and we usually solve it with degenerate doping under the contact. I'm not sure what a junction with this material would do, though.
I want to note that in what has become the largest (by mass) application of semiconductors, silicon PV cells, boron has been replaced by gallium as the P type dopant of choice. Boron suffers from an annoying form of light-induced efficiency degradation that gallium avoids.
This is rich coming from the company that scraped the entire internet and tons of pirated books and scientific papers to train their models.
Maybe if you didn't scrape every single site on the internet they wouldn't have a basis for their case that you've stolen all of their articles by training your models on them. If anyone is to blame for this it's OpenAI, not the NYT.
I think the biggest thing that makes me distrust the news as it stands is that I feel like news reporting is far too prone to overly leveling debates. And by that I mean making both sides come across as equally credible, even when that could not be further from the truth.
The most common way I see it happen is like this:
You have a situation where some group says that some totally safe thing is actually super dangerous.
There's a large body of scientific literature that really clearly shows it's totally safe.
The news reports on it as such:
"While many within the scientific community state that there is no harm with X, anti-X proponents respond that the current studies are not substantial enough, and that they are simply asking questions."
This framing does not point out that the anti-X proponents are just a group of 10 people, nor does it describe how much evidence exists in the scientific literature showing the thing is safe. Both sides are made to sound equally reasonable, which in my mind is practically a lie by omission, because they aren't equally reasonable.
Edit: One additional thought. I will still read news articles if they get shared to me, and I try to evaluate them based on what the source is. But another reason I don't keep an active news subscription is that news orgs love reporting on tragedy, because it's more noteworthy. I'm just not interested in reading yet another article about how crime is on the rise. Or about the most recent fatal car crash. Etc.
I stare into the void enough as it is. I don't want another.
Strong agree. There seem to be a lot of people who think that being in the middle without taking a side is virtuous in itself, as if there's a law of equilibrium keeping both sides equally crazy or competent or corrupt at any time. In reality they're just getting dragged by the Overton window as other, richer people slide it around.
You're overestimating the number of people who trust scientific consensus. More people believe in creationism than in evolution by purely natural selection. What's your alternative to treating both sides with equal weight without making the unscientific side feel disenfranchised?
News isn't supposed to be a democracy, it's supposed to be a hunt for truth and accuracy. If news is intentionally subjecting itself to systemic bias in the hopes of attracting people with firmly held patently absurd beliefs, they shouldn't cry "how did this happen" when people who financially supported them in the hunt for truth and accuracy stop doing so.
The news is democratic in the sense that news targeted at the general public needs to be catered to the general public, just like any other product. What good does framing an issue as one-sided do if nobody is going to watch it except people who agree with it already?
It sounds like we're talking about different products. I want to buy information, not reassurance. Maybe the purchasing power of people like me has gone down relative to the general population, but news organizations shouldn't be surprised then when people like me are less interested in the product they modified to be more palatable for a different target audience.
It's a long-standing criticism: if the president suddenly announced that he thought the earth was flat, the New York Times headline the next day would be "Shape of the Earth; Views Differ".
Not to mention problems often have many causes and many possible solutions; even framing the reporting as having merely two sides is crippling to news quality.
ZeroGravitas says: "it's been trending down most places for decades."
The problem is that reported crime is a political number: those in power want the number to go down on their watch and they have control of the reporting mechanism. Pressure exists at every level, from patrolmen filling out incident reports to statisticians collating the numbers for the mayor, to move the numbers downward.
There is every reason to be skeptical of reported crime trends in US cities today.
The police have bosses, and the bosses want to have careers. From The Wire: S04E09.
Pryzbylewski: I don't get it. All this so we score higher on the state tests? If we're teaching the kids the test questions, what is it assessing in them?
Sampson: Nothing. It assesses us. The test scores go up, they can say the schools are improving. The scores stay down, they can't.
Pryzbylewski: Juking the stats.
Sampson: Excuse me?
Pryzbylewski: Making robberies into larcenies. Making rapes disappear. You juke the stats, and majors become colonels. I've been here before.
Sampson: Wherever you go, there you are.
It's unfortunate that the "crime is higher than reported" narrative has been hijacked by bootlickers who are chomping at the bit for a pretext to sic the military on large cities, but the underlying idea that crime statistics can be gamed for the sake of self-interest isn't wrong.
It seems nuts to claim the year-over-year national drops in crime are all down to gaming the stats. I'm positive there is some of this going on, but an effort like that to suppress crime statistics at a national level doesn't pass the sniff test. You're saying no cops retired and then blew the whistle? That everywhere in America is doing this and nobody notices (enough)?
Similarly, some cops are actual criminals and totally corrupt, like Taglione or the Baltimore Gun Trace Task Force, but that sure as hell doesn't mean everyone is.
Do you think police are systematically underreporting crime to the tune of like multiple decades lows?
I hear this "crime is down" theme a lot. But I see with my own eyes, in my own neighborhood, that the opposite is happening. Other people do as well and that is a big reason why the news media is viewed negatively.
Crime is certainly down, assuming you live in a developed country that isn’t one of a couple massively regressing ones.
You are displaying a bias here. Unless there is an "itbeho's neighborhood gazette" reporting exclusively on what you see through your front window, and THEY'RE saying there is no increase in crime, your complaint doesn't hold.
My grandfather, in the US, was waylaid by actual bandits operating openly on a major US highway in the 1970s.
In some cities you couldn’t take public transport without almost surely being victimized.
Crime is very much down and that’s especially true for large cities. Maybe you’ve become overly fixated on the topic or maybe your neighborhood is an outlier but that doesn’t make the statistics “wrong”.
One problem is that most of the distrust toward "traditional media" comes from people who completely trust even more dishonest sources. It is not that traditional media is perfect, but its faults are not the actual reason for the fall in trust.
Instead, it is well-paid grifters for whom the issue with traditional media is that it does not lie enough.
I've heard this referred to as sanewashing. You really started noticing this with Trump. Compare what he literally says to what he's quoted as saying. He likes to rail against MSM, but man, they do a lot of heavy lifting, making it seem like what he says is remotely sane.
They often edit his speeches for brevity because they don't have time in a news report to air the full hour-long ramble of tangents off tangents off tangents. Even doing that makes his speeches seem considerably more coherent than they actually are if you watch the whole thing.
You mean like the BBC? I mean, that "brevity edit" removed 35 minutes between two sentences. This is the latest, but not the only, example of media creating quotes that were never said.
Frankly, I'd greatly prefer media to leave the speeches unedited. Media does not need to be crafting narratives. Whatever Trump is, if you are selectively editing his words or pulling quotes out of context intentionally to create the appearance that he said something he didn't... you are worse than he is.
There are in fact two-sided floppies! IIRC they behave a lot like the two sides of a cassette tape: the floppy drive only reads from one side at a time.
A fun fact in that regard: the game Karateka (an actual game for the Apple II) had an easter egg. The team realized that their game fit entirely within the capacity of one side of a floppy, so they put a second copy of the game on the other side, set up so that it would render upside-down.
I'd not be surprised if the inclusion of that detail in this post was directly inspired by Karateka.
The Apple II had a non-linear layout of video memory, so programmer Jordan Mechner used a layer of indirection where he had an array of pointers to rows of screen memory.
They realized that inverting the screen was as simple as inverting the row-pointer array. Then they managed to convince Broderbund to ship a double-sided floppy with that change in the software.
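Roughly what that looks like (a loose Python-flavored sketch of the idea, definitely not Mechner's actual 6502 code; the row-address formula is the standard Apple II hi-res interleave):

    # The Apple II hi-res screen is 192 rows whose base addresses are interleaved
    # in memory, so renderers keep a lookup table: row number -> address of that row.
    SCREEN_BASE = 0x2000
    ROWS = 192

    def hires_row_address(y):
        """Base address of hi-res row y (standard interleave formula)."""
        return SCREEN_BASE + (y % 8) * 0x400 + ((y // 8) % 8) * 0x80 + (y // 64) * 0x28

    # Normal orientation: table[0] is the top scanline, table[191] the bottom.
    row_table = [hires_row_address(y) for y in range(ROWS)]

    # The upside-down build is just this table reversed: every draw call that goes
    # through the table now lands on the mirrored scanline, with no other changes.
    flipped_table = list(reversed(row_table))

    def byte_address(table, x, y):
        """Where a drawing routine would write pixel column x of logical row y."""
        return table[y] + x // 7          # 7 pixels per hi-res byte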
This is the kind of app I wouldn't believe could actually exist. Human rules are just so painfully complex and unwilling to agree with the concept of consistency.
Insanely impressive that it works well enough for more than just the developer to find use in it.
It makes perfect sense to me that Nextcloud is a good fit for a small company.
My biggest gripe, having used it for far longer than I should have, was always that it expected far too much maintenance (a four-month release cadence) to make sense for individual use.
Doing that kind of regular upkeep on a tool meant for a whole team of people is a far more reasonable cost-benefit trade-off, especially since it only needs one technically savvy person working behind the scenes and is very intuitive and familiar on its front end. That makes for great savings overall.
I've used Nextcloud for close to 8 years now, I think, as a replacement for Google Drive.
However, my need for something like Google Drive has shrunk massively, and Nextcloud continues to be a massive maintenance pain due to its frustratingly fast release cadence.
I don't want to have to log into my admin account and baby it through a new release and migration every four months! Why aren't there any LTS branches? The amount of admin work that Nextcloud requires only makes sense when you legitimately have a whole group of people with accounts who are all using it regularly.
This is honestly the kick in the pants I need to find a solution that actually fits my current use-case. (I just need to sync my fuckin keepass vault to my phone, man.) Syncthing looks promising with significantly less hassle...
Been using Syncthing with KeePass(X/XC) for probably half a decade now and it works great, especially since KeePassXC has a great built-in merge feature for the rare cases where you get conflicts from modifying your vault on different clients before they sync.
The only major point of friction with Syncthing is that you should designate one almost-always-on device as the "introducer" for every single one of your devices, so that it tells all your devices whenever it learns about a new one. Otherwise, whenever you gain a device (or reinstall, etc.) you have to go to N devices to add the new device there.
Oh, and you can't use Syncthing to replicate things between two dirs on the same computer - which isn't a big deal for the KeePass use case, and is arguably more of an rsync+cron task anyway, but good to be aware of.
Been running NC on my home server and I update it maybe once a year or so? Probably even less, so it's definitely not a must to update every time. Plus, via snap it's pretty simple.