
> Like I shouldn't be billed extra if my surgeon is hungover and things don't go smoothly.

That’s what you think is the common root cause of complications?

You clearly are not interested in a productive discussion.

As for a fee for unpredictable occurrences - that’s what insurance is.


If the amount to bill the patient's insurance is determined after the complication, it's not an unpredictable risk; it's a fact. If it can't be predicted, then it more or less exists for each procedure, but that isn't reflected in the cost: the patient who has the bad experience gets charged extra for it.


Insurance exists to cover losses from unexpected events. Medical complications are unexpected events.

Your hungover surgeon is a bullshit strawman - most complications have nothing to do with provider malice or incompetence. Again, since that seems to be the angle you are starting with, you clearly have no interest in a grown-up discussion, or are too ignorant and full of hubris to understand any of this (which fits in perfectly well on this site).

If you throw a massive clot after a surgery and stroke out, whose fault is that if all the standard protocols for clot prevention were followed? Maybe you're a smoker (or not) and 5 years later that unknown cancer will finally declare itself.

> more or less exists for each procedure

This is extremely misleading, as it does not exist at any meaningful level of risk across the entire patient population.


I think you're getting hung up on a joke about the hung-over doctor. The problem is that whatever the complication is, it should be amortized over all the deliveries of that treatment, not the one case where it happens to occur.


If they can predict the complication, they can bill for it ahead of time.

It is all insurance in the end, but the incentives created by pushing the providers to bill ahead are better than the ones created by letting them bill for what they do.
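The amortization point can be sketched in a few lines. All figures here are hypothetical, and `amortized_price` is purely an illustrative name, not anything a real billing system uses:

```python
def amortized_price(base_price, complication_cost, complication_rate):
    # Spread the expected cost of a complication across every procedure,
    # rather than billing only the unlucky patient after the fact.
    return base_price + complication_cost * complication_rate

# e.g. a $10,000 procedure with a 2% chance of a $50,000 complication
print(amortized_price(10_000, 50_000, 0.02))  # 11000.0
```

Billing every patient a flat $11,000 up front covers the same expected cost as billing the one-in-fifty patient an extra $50,000 afterwards.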


> The service seems to be by a company called Epic (www.epic.com), based in Wisconsin.

Lol, that mom and pop shop.

That’s the largest EMR vendor in North America and second in the world.

They’re the provider of the patient portal frontend, amongst other things, but there’s a lot more behind the scenes that must be done (with varying levels of quality) that is institution-specific. Just using Epic does not get you this.


Exactly. IF you're a hospital system paying tens of millions of dollars annually to Epic, and IF you want to pay for Epic's price estimation tool and offer it to consumers, then, yes, your patients can get a rough estimate of the cost of their elective surgery.

I grew up in Madison, Wisconsin. Most of my graduating high school class works for Epic. Their campus rivals Google's. Fun fact, they bought all the farm land you can see to the horizon and rented it back to the farmers for below market rates just to ensure the view from their campus was unobstructed by new development.

If you think anything about Epic is free or in the patient's best interest, check out their auditorium: https://cuningham.com/portfolio/epic-deep-space-auditorium You could host a Taylor Swift concert in there and have unused seating.


Putting aside the systemic critiques, I really dig the design of that building and its surroundings. That auditorium even puts Apple's to shame.


And apparently they have so many employees the auditorium isn’t big enough anymore, so there’s some spillover for all-staff meetings.


> You could host a Taylor Swift concert in there and have unused seating.

Taylor Swift regularly sells out venues with capacity of 70k+ (football stadiums), it's doubtful she would even consider an 11k venue unless it was a private event.


Without cryptographic techniques that is all easily forged.


The point is forgery of signatures and contracts is not typically an issue.

In contract disputes, there's usually no dispute over whether a contract was signed. Sometimes there's a dispute over which contract was signed, but then each party may or may not have a signature on a contract. Much more often there's no disagreement on the contract or that it was signed, but on the terms.

It's nice that electronic signing can solve the issue of validity of signatures, but it's not that big of a deal, because it wasn't that much of an issue; and that's why e-signing has devolved into 'click a button to enter a signature' without any sort of cryptography.


Have you ever had to enter codes as a provider in Epic, for instance? The UX is fucking terrible. Incentives are completely misaligned for accurate coding in academic institutions. Finally, the ontology of ICD codes is trash on its face.


> X-rays confirmed a sprain in Gus's wrist

Eh, ok.

Yet more of the laziest, most cynical AI-generated garbage. AI winter can’t come soon enough.

Seriously I see nothing here of interest to a healthcare professional.

> The ICD-10-CM coding system is a remarkable catalogue of every imaginable medical condition, injury, disease, and even the seemingly improbable events that can befall a human being.

The thing with ICD is while it has an odd array of oddly specific and absurd codes it misses sufficient detail for some very basic common diagnoses.
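For readers unfamiliar with the code shapes being argued about, here is a rough sketch of the ICD-10-CM format. The regex is a simplification (real validity depends on the published code set, not just the pattern), and the sample codes are for illustration:

```python
import re

# Rough ICD-10-CM shape: a letter, a digit, an alphanumeric,
# then up to four more characters after an optional dot.
# (Simplified: this checks shape only, not whether a code exists.)
ICD10CM = re.compile(r"^[A-Z][0-9][0-9A-Z](\.[0-9A-Z]{1,4})?$")

for code in ("S62.001A", "W61.62XD", "E11.9", "NOT-A-CODE"):
    print(code, bool(ICD10CM.fullmatch(code)))
```

The seventh character (like the trailing `A` or `D` above) encodes the encounter type, which is part of why the catalogue balloons into oddly specific entries while still lacking clinical detail elsewhere.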


I think it's more about being entertaining, rather than truly educational to someone who is already a doctor.


Right from the about page:

> Our clinical vignettes serve multiple purposes. They provide a distinctive, engaging learning tool for medical students and healthcare professionals looking to familiarize themselves with the ICD-10-CM system.


1994 was way way after dialup Internet access was mainstream (both Yahoo and Amazon were founded that year). Any first access in the state would be sometime in the 80s. By 1993 there were already national level dialup ISPs.

> that was about as long [15 minutes] as it took to load one webpage with one image.

Very hyperbolic. A simple webpage with text would load in seconds on a 28.8k modem. A single image would usually be tens of kB in those days, so maybe some seconds, not even a minute.
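Back-of-the-envelope, assuming ~10 bits on the wire per byte (start/stop framing) and no compression - the image size is illustrative:

```python
def transfer_seconds(size_bytes, line_bps, bits_per_byte=10):
    # Dial-up serial links carry roughly 10 bits per byte
    # (8 data bits plus start/stop framing).
    return size_bytes * bits_per_byte / line_bps

# A ~30 kB inline image of the era:
print(round(transfer_seconds(30_000, 28_800), 1))  # ~10 seconds on a 28.8k modem
print(round(transfer_seconds(30_000, 9_600), 1))   # ~31 seconds at 9600
```

Even on a 9600 line a page with one modest image lands in well under a minute, not fifteen.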


This is not correct for access open to the general public. The first commercial ISP in Utah, Xmission, was founded in 1993. Yes, many of us had internet access through the University of Utah before that (Pete Ashdown, the founder, had worked at Evans & Sutherland, which had quite good internet connectivity). But most people did not if they weren't associated with a university.

(I helped create the third public ISP in Utah (ArosNet), in 1995).

V34 (28.8k) was only ratified in 1994, and many ISPs were still at 14.4 at that time. Many customers still used much slower modems - 9600 remained quite common.

The commercial Internet really only started taking off in 1993. Not by coincidence, that was the same year NCSA Mosaic was released.


Wow, Utah really lagged on getting an ISP, huh?

From a UK perspective: my family got dial-up in ‘94, there were lots of ISP options, and it was basically impossible to buy anything slower than 28.8k new (at retail anyhow; I’m sure you could special order) as nowhere stocked them. 28.8 took over fast.

I think the UK had lots of ISPs at the time because without a local number to call it was VERY expensive, rather than just kinda expensive. But that’s just a guess.


Population density helps a lot with internet access. Distance to COs is smaller, it's easier to wire, backhaul is shorter, etc. The Salt Lake City metro area had reasonable density but, particularly 30 years ago, it wasn't like most places in the UK or Europe.


Commercial ISP. People had dialup internet access before there were commercial ISPs.


Yes, I’m aware. That’s what I was talking about


I was still fighting local telcos in Canada in 2003 to get filters removed so I could go above 9600 baud.


I moved to Canada in 2002 and my 56k modem worked just fine and I'm pretty sure (though I might be misremembering) cable and ADSL were already around (I think people were amused that I was still using dialup). I'm sure some other parts were behind.

I think I had Internet access around 1984 or so; I lived near a university (this was not in the US, so it must have been some early international connection). Before that we had BITNET (IBM's network) and uucp. My first networking-from-home experience was with a 300bps modem to an IBM mainframe, using a terminal program I wrote myself on a Sinclair ZX Spectrum. It was pretty crappy half-duplex, but pretty exciting as a kid.

A trip down memory lane...


This was St. Joseph's Island. As soon as we had a little WiFi-based ISP up and running, suddenly broadband was available. Wonder why...


> This is not correct for access open to the general public. The first commercial ISP in Utah, Xmission, was founded in 1993. Yes, many of us had internet access through the University of Utah before that (Pete Ashdown, the founder, had worked at Evans & Sutherland, which had quite good internet connectivity).

> The commercial Internet really only started taking off in 1993.

1993 is before 1994.

I am very much scratching my head at how this contradicts anything I stated in a way that makes it not correct.

If a commercial ISP existed in 1993, then by 1994 plenty of regular people would have been getting internet access - without any special affiliation other than a credit card - i.e., mainstream. (Per your own comment, “many had internet access before that”.) That those with affiliations were among the first to have internet access is a pretty reasonable interpretation, and that was well before 1994 in all of the continental US.


It wasn't "way after" in Utah. Xmission turned on in October 1993. They grew a lot in 1994 but most people in the valley still did not have internet access yet. By 1996 the situation was very different - but 94 was still early in the Utah Internet days. The growth in Internet adoption was so rapid during those 3 years that the difference just between 94 and 95 was quite large.

And yes, many faculty, students, and staff at the U had access. But that was like 50,000 people in a metro area of a few million.


Ok, not “way” after - really splitting hairs here. The point still stands: public commercial dialup internet was available pretty much everywhere. Call them early adopters or whatever, but the internet already had established communities well before 1994.

> 50,000 people in a metro area of a few million.

We’ll just have to agree to disagree on the interpretation of what “first families” means. In the context I read that post, it sounded like someone saying they were among literally the first few, not 50k to 100k when anyone with a credit card could order service. First families, in my interpretation, would be those that probably had access from their parents’ university shell account. This goes along with the claim of 15 minutes to load a single webpage - but unless you were on a shitty rural phone line running 2400bps, it’s not like everyone’s dialup internet access at the time was that limited. Some had to put up with that, but the tech in 1994 was not that primitive.


50k had access if they wanted it. Most didn't use it. I'm confused why you don't believe me that internet penetration in the Salt Lake valley was very limited in 1994 - I was there, I ran an internet service provider, and prior to that I co-ran the largest multi-line BBS in Salt Lake. I'd been doing dial-up for a long time.

You may be assuming that your experience in a different location applies to Utah, but I think that you're really just shifted by a year. The GP almost certainly wasn't actually one of the first families in the sense of dozens, but they could well have been first among the people they knew in their area. 15 minutes is probably hyperbole.


> I'm confused why you don't believe me that internet penetration in the salt lake valley was very limited in 1994

I do believe you. Really have no dispute with any details you’re putting down.

I suppose the distinction I’m making is about the cohort of early adopters that had special access (usually through the U) versus those using commercial ISPs or BBSes. That earliest cohort, no matter how small, came a good bit earlier than 1994 and eternal September. I’ll grant my wording inadvertently exaggerated the penetration of availability in 1994; I’m just saying the first households were probably getting dialup some years prior.

For me, an EE prof managed to get me a shell account in 1990 while I was in middle school. Even in the rust-belt US, many friends just used AOL into 1994 and uptake of dialup ISPs was still slow, but that 1994 cohort was distinct.


My first dialup was to BBS's in the early 80s using a Novation AppleCat[1] and I think I was using The Source[2] around the same time, which eventually got swallowed by CompuServe[3]. To access The Source you dialed in to Telenet[4] first and then connected from there.

1. https://en.wikipedia.org/wiki/Novation_CAT#The_Apple-CAT_II

2. https://en.wikipedia.org/wiki/The_Source_(online_service)

3. https://en.wikipedia.org/wiki/CompuServe

4. https://en.wikipedia.org/wiki/Telenet

I think my first Internet access was through Prodigy[5]. Wikipedia says this wouldn't have been till 1994[6], but I remember it being a few years earlier, since by 1994 I would have been using AOL.

5. https://en.wikipedia.org/wiki/Prodigy_(online_service)

6. https://en.wikipedia.org/wiki/Prodigy_(online_service)#Conve...

My first significant Internet access was at U.F. in 1995. While at U.F. I was also the sole system administrator for a small ISP in Gainesville. Two PCs running Slackware, a Livingston PortMaster with a dozen Hayes modems attached and a T1 for uplink.

My family also piloted something called Viewtron in the early 80s:

https://en.wikipedia.org/wiki/Viewtron

AT&T marketing video for Viewtron:

https://youtu.be/sgYkpk9nJnE


I would say dial-up Internet was far from mainstream in 94, even if it was widely available.

I'd argue it wasn't until 96/97 when "everyone" started using it, and membership in services like AOL didn't peak until 2001.

The internet was still the land of the nerds until the early 2000s.


It's difficult to generalize. It definitely depends on your location, and especially population density. Before 1995, it was mostly nerds and early adopters. By 1995, in the northeast US, dial-up internet had definitely gone mainstream. Local ISPs had ads on the radio. A new one was popping up every couple of months. By late 1997 / early '98, broadband services (@Home cable modems, DSL) were starting to roll out.

The Netscape IPO, in summer 1995, and also the release of Windows 95, really marks the "mainstream" period. Getting online with Trumpet Winsock and Windows 3.1 was a PITA.


> The internet was still the land of the nerds until the early 2000's

The first dot-com boom, sort of the genesis of the fortunes that make this site exist, was prior to the early 2000s.

The early 2000s was the bust period.


Right, unless the backhaul was massively overcommitted... which was normal at the time because it was really difficult to sufficiently provision backhaul, even a few months in advance. The internet was certainly not fast in 1994.

Anyway, since when does the truth need to get in the way of a good story?


You assume 28.8, good phone lines, and a responsive server. Sometimes it was an old 9600 because that was all you could easily get, plus noisy lines and a slow server on the other end because the picture was popular. Then a page could load for a minute, pictures and all.

Not 15 minutes though.


Dialup services were available. Most had little to no internet connectivity. The few services dedicated to internet access were not mainstream yet. MS was preparing to deploy MSN 1.0 with no internet because that was just a hippie fad.


I remember having dialup in 1994 through xmission. It was the same year my dad installed Slackware on our basement computer.


Eh, I think the person you’re replying to would assert that Usenet thrived prior to 1993.

We don’t need a lesson about eternal September either.


> In embedded work, you don't get extra credit for being faster than necessary.

You absolutely do when you can cut power requirements and get by with cheaper CPU/hardware. I ran a whole consulting business redesigning poorly designed devices and redoing firmware for cost reduction. How does one decide what's "necessary"? What is necessary in the short term and the long term are often not the same.


Let’s be real here: this system is flawed at its core. It can’t even handle the load of some moderately popular PHP forums with simple deployments, or even this site - none of which took decades to harden. And this is due to gross flaws in architecture, IMNSHO.

Comparing it to Reddit is silly. And yes I’m sure there are more than a few here that can honestly say their servers could handle this load - it isn’t much.


Care to point out the core flaws?

As far as I'm concerned there aren't any core issues. There are a lot of queries to a single DB which is on the same server, but there's nothing preventing you from running the DB on another server and sharding it if need be. I wouldn't call that a core architectural issue, because the way you set up your DB is not core to the architecture.


It’s a federated system without any of the affordances that would make it usable (or particularly interesting to me). Literally load balancing by going to a long list of alternate instances. The site’s own documentation just says: find one that works, and if that shits the bed, find another. This is a UX agreeable to a very tiny and idiosyncratic group (as evidenced by Mastodon once the “Elon is evil” hype died).

Yes, of course, in theory you can beef up any instance, and shard, and add a caching and queueing layer while you’re at it. And ta-da, you’ve just rebuilt Reddit.


Sharding, caching and queuing doesn't break federation. That's not a core flaw.

As the ecosystem grows, a shortlist of popular, robust, federated instances will crop up for people who just don't want to bother.

Again, if you have any core flaws, I'm all ears.


> Sharding, caching and queuing doesn't break federation. That's not a core flaw.

I didn’t say it did, but it doesn’t enhance it either.

> As the ecosystem grows, a shortlist of popular, robust, federated instances will crop up

How short is a shortlist? At what point does that just mean new Reddit?

I suppose I would be more enthused if, at the least, the basic design eased standing up high-traffic (or, let’s be honest, even mild-traffic) instances. The performance story right now is: it’s written in Rust - which is not nothing, but it would be more interesting if supporting even a moderate amount of traffic on minimal hardware was an architectural priority. The flaw as I see it is that this is just not a design goal.

https://github.com/LemmyNet/lemmy/issues/2877

https://github.com/LemmyNet/lemmy/issues/2910

You can argue that things can be improved, but initially well-engineered systems help to build initial mindshare.


Even just two or three large and popular instances are enough to make a new Reddit impossible, so that's how short it can get.

The performance story isn't just that the backend is written in Rust. It's also that the frontend is very lightweight (80kB), that the architecture is horizontally scalable, etc.

I don't see in any of those links a core flaw, a problem in the initial engineering that makes it impossible for the architecture to scale. All I see are some poorly written queries, for which the main devs made a root cause analysis and described how it can be fixed. Is that evidence that the inherent design isn't capable of adequate performance? No, it isn't. It's evidence that there are some small performance gotchas that can be fixed easily, and this is normal for an ambitious project.

But even as it is, yes, it can handle mild traffic. lemmy.ml runs on potato hardware, and hexbear.net is an example of an instance with ~3,000 comments a day - roughly the scale of this website - that runs fine on a single dedicated server.


I dunno, maybe I’m wrong about the technical stuff. My minor point is that this software has some technical flaws today - forums and link aggregators are problems that have been solved over and over again, so implementation excellence would at least be a unique value proposition.

I’ll quote the top-voted thread here

>> Nobody wants a federated, slow, difficult to use version of reddit. Nobody wants to choose a server.

> I want this. I want this because it's a sustainable way to have Reddit without the ads. The bad UX is an acceptable tradeoff for a platform that doesn't go to shit.

I’m just not buying how a federated system of isolated instances solves this. What fundamentally prevents the dominant oligopoly or monopoly server(s) from just being Reddit running on Lemmy? Lemmy doesn’t dictate how things are run - so why won’t a major funded instance just evolve into a new Reddit? How does Lemmy decisively get you to a Reddit without the ads? What stops a major Lemmy instance from going to shit?

Just having federation as an opt in feature doesn’t force the system to evolve in a particular way.

If a “Voat” equivalent pops up it’s not like the dominant instances are going to federate with it.


> I’m just not buying how a federated system of isolated instances solves this. What fundamentally prevents the dominant oligopoly or monopoly server(s) from just being Reddit running on Lemmy? Lemmy doesn’t dictate how things are run - so why won’t a major funded instance just evolve into a new Reddit? How does Lemmy decisively get you to a Reddit without the ads? What stops a major Lemmy instance going to shit?

A federated system of isolated instances doesn't solve it, because that's not a federated system. The point of federation is that the instances aren't isolated, and they're highly interchangeable. No major instance has the incentive to become a new Reddit, because it's so easy to switch instances at every level that they just don't have the moat to make that happen. It's exactly the same thing as an email provider going to shit, but even less problematic.

If a Voat instance pops up, sure, dominant instances won't federate with it. That's perfectly fine. It doesn't mean that the inter-federated dominant instances can get away with pulling a Reddit.

Federation is an opt-in feature in theory, but in practice, what's the value proposition for an instance to turn off federation today?

It's really quite simple. Without federation, there is an incentive to pull a Reddit. With federation, there is no longer an incentive to do so, because you don't have a moat. What do you think would happen if Reddit only had 1/2 of the subreddits anyone used and if you could keep access to all the same communities on a competitor?


> With federation, there is no longer an incentive to do so, because you don't have a moat

Domination is orthogonal to a technical federation feature. Once there is enough imbalance, you defederate and that’s that. There’s nothing that inherently prevents gross imbalance from forming, and the natural forces favoring centralization - such as funding one beefy instance - still apply.

> value proposition for an instance to turn off federation today?

Maybe not today, but it would be the same as any historical netsplit.

> What do you think would happen if Reddit only had 1/2 of the subreddits anyone used and if you could keep access to all the same communities on a competitor?

I think that still puts them in a fucking dominant position. And why automatically assume competitor vs cabal?

Anyway good luck with your project.

I’ve been using the internet since IRC and Usenet - both federated in their own way - both completely marginal.


There actually is something preventing gross imbalance from forming. Slightly different federation choices by large instances with regard to extremists are a big one, and since power users tend to be central to the content and won't move instances too easily, there is strong inertia against communities forming exclusively on one instance.

IRC was never federated. Usenet was, and it did die out - but email is federated too, and it's been working far longer than it had any right to.


> IRC was never federated.

What? What do you think the term “relay” in IRC means? The jargon term “netsplit”, used even for newer federated networks (even elsewhere in these comments), comes from IRC. There is literally an entire network named after a defederation event.

Now we sort of take for granted that IRC is basically a closed federated system - but the original design of the network was one dominant set of relays; EFnet is a direct descendant of that network, after all. If anything, it’s just a specific example of politics and network evolution. There are technical reasons as well, but at the time of the early splits of the 90s (EFnet, Undernet) it was not primarily about technical problems.

Anyone involved in the fediverse, I think, would do well to learn some lessons from IRC, even if their system is technically superior.

> but so is email

As a federated system, barely - go try to stand up an email server on your home network or a VPS and see how well that works. It’s run by a cabal of large providers.

https://news.ycombinator.com/item?id=30224478


No. In games AI is a jargon term for the behavior of an NPC. It has a long history in this use. When people talk about a game’s AI it is often clear what is being discussed regardless of the specific technology used. It is therefore useful for communicating an idea and that’s usually all that really matters.

There’s even a distinct Wikipedia article on this use: https://en.wikipedia.org/wiki/Artificial_intelligence_in_vid...

