p_ing's comments

> the top comments are invariably hilarious,

Sadly, that is all that Reddit is now. Have a serious question? Expect multiple top replies to be some sort of [un]funny joke answer.

It's a wasteland and devalues the platform when everyone competes for Internet Points.

/r/aviation is just one example of being full of this crap.

Oddly enough, I don't see it as much in gaming subreddits, even the more generic ones.


Reddit lacks consistent moderation, and the worst offenders are location-based subreddits, where all dissenting takes are effectively hidden.

Yet one can imagine a limited set of filters that could in theory fix this:

    - eliminate obvious bots
    - eliminate low content / metoo / naysaying
    - eliminate memes
    - detect and promote high quality controversial posts equally to unilaterally upvoted ones

And perhaps let subreddits conditionally opt in or out of each of the above, but require them to declare which. We know at least half of these are easy, and LLMs now open the door to new automations, though it's likely not cost-effective yet.
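As a rough sketch of how such opt-in filters might compose (all names, heuristics, and thresholds here are hypothetical stand-ins; a real system would replace them with classifiers or LLM scoring):

```python
# Hypothetical sketch of a comment-filter pipeline for the ideas above.
# Each filter returns True if the comment should be kept; a subreddit
# could opt in or out of individual filters by name.

from dataclasses import dataclass


@dataclass
class Comment:
    author: str
    text: str
    upvotes: int = 0
    downvotes: int = 0
    author_account_age_days: int = 0


def not_obvious_bot(c: Comment) -> bool:
    # Placeholder heuristic: brand-new accounts posting links are suspect.
    return not (c.author_account_age_days < 2 and "http" in c.text)


def not_low_content(c: Comment) -> bool:
    # "me too" / naysaying one-liners carry little information.
    return len(c.text.split()) > 3 and c.text.strip().lower() not in {"this", "same", "lol", "no"}


FILTERS = {
    "bots": not_obvious_bot,
    "low_content": not_low_content,
}


def keep(c: Comment, enabled: set[str]) -> bool:
    return all(FILTERS[name](c) for name in enabled)


def rank_key(c: Comment) -> int:
    # Promote high-engagement controversial posts alongside unanimously
    # upvoted ones: rank by total engagement, not just net votes.
    return c.upvotes + c.downvotes


comments = [
    Comment("a", "this", upvotes=50, author_account_age_days=400),
    Comment("b", "Detailed, divisive take on the incident.", upvotes=30, downvotes=28, author_account_age_days=900),
    Comment("c", "Unanimously liked explanation of the airframe.", upvotes=60, author_account_age_days=1200),
]

visible = sorted(
    (c for c in comments if keep(c, {"bots", "low_content"})),
    key=rank_key,
    reverse=True,
)
print([c.author for c in visible])  # the one-liner "this" is filtered out
```

Note the ranking step: a 30-up/28-down comment scores nearly as high as a 60-up one, which is the "promote controversial posts equally" idea in miniature.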

Still, I suspect the largest barrier is merely that all the popular social media sites are actively captured by ad-driven development/leaders. That can't last forever; people are sick of it.


> Still, I suspect the largest barrier is merely that all the popular social media sites are actively captured by ad-driven development/leaders. That can't last forever; people are sick of it.

This is why it's a good idea to make the switch to federated alternatives like Lemmy/PieFed. The more people who do this, the more people will see it as a viable alternative, making it easier to get away from the ad-driven model of social media.


Retvrn to oldschool forums with chronological posting.

> I get ads both for dick pills and boob surgery

There is some percentage of the world-wide population that would find interest in both ads simultaneously.


While true (you're not the first to suggest it, even), in the context of the other things they show, I think it is more likely to be an example of them not knowing which advertiser to pitch my eyeballs at, and less likely to be them identifying me as a member of this set.

Openclaw is a security nightmare. Not exactly the shining beacon of examples to follow.

I work in AI security at Meta, so I get the concern, but a lot of these risks are overblown if you know what you're doing. I installed Openclaw on a fresh laptop with no sensitive data, no saved credentials, and no access to production systems. There's no way it can exfiltrate data or do any harm in that setup.

That, uh, doesn't detract from its security issues. And it arguably makes it less useful as a product.

"If I put this virus on a virtual machine with no network access, it's safe".

Sure.


Does your employer know about your promotion of inauthentic content on a social network site not controlled by your employer?

Yes, Meta approved the side project through the normal OSS process. It's a free, open-source tool; there's nothing to promote inauthentically. And there's nothing inauthentic about it: the advice the agent gives is pulled directly from my experience and codebase. 30+ people have thanked it for help that saved them real time and money.

Users don't understand the CLI, nor do they want to manage the systems needed to run it.

MCP provides users with an easy-to-use, convenient method to access data.


This is exactly what LLMs unlock, though.

Users don't need to understand CLI flags or read man pages anymore; the model does that part. It translates natural language into the right commands, stitches tools together, handles errors, retries, etc.

The CLI becomes the execution layer, not the user interface.

MCP makes sense if you're building a polished end-user product, but if the agent is already sitting on a machine, the LLM is the friendly interface to the CLI. That's literally what it's good at: contextualizing intent, mapping it to commands, and adapting when things change.
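A minimal sketch of that pattern. The `ask_model` function here is a hypothetical stand-in for whatever LLM call an agent would actually make (a canned lookup table stands in for the model); only the execution layer is real:

```python
# Sketch: the LLM as the friendly interface, the CLI as the execution layer.
# ask_model() is a hypothetical stand-in for an LLM call that maps a
# natural-language request to a shell command.

import shlex
import subprocess


def ask_model(request: str) -> str:
    # Stand-in for the model's intent -> command translation.
    canned = {
        "how big is this directory": "du -sh .",
    }
    return canned.get(request.lower(), "echo 'no command for that request'")


def run_intent(request: str) -> str:
    cmd = ask_model(request)
    # The agent, not the user, deals with flags, errors, and retries.
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    if result.returncode != 0:
        return f"command failed: {result.stderr.strip()}"
    return result.stdout.strip()


print(run_intent("How big is this directory"))
```

The user never sees `du`, its flags, or its error handling; they only state intent.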


When using a multi-tenant SaaS LLM with strict private networking requirements, MCP is a great way to expose data/services.

But like you said, if you want something unpolished/hobby level, go for CLI.


Of course it’s possible. There are multiple decentralized and fragmented solutions. This is a concept from the early 2010s.

Storj and Sia are just two examples.


Thanks a lot for comparing my solution to Storj-like infrastructures. You're right that the concept of decentralized/fragmented storage has roots in the early 2010s with Storj and Sia.

But what I meant was that deduplication combined with a safe network was seen as impossible by many, which is why I made that statement.

Unlike those projects, we don't require complex node setups or crypto tokens. We work on top of base providers like Dropbox, providing well-hardened safety that anyone can use without prior crypto knowledge, with streaming and parallel downloading built in.

We've created a consumer version of a file manager with an integrated mesh (all cloud providers mixed), with options like creating shareable pools. In total, your college notes sit everywhere, across every user's cloud provider, safely, and you only really appreciate the speed of parallel downloading from different providers when you try it out.

Once again, thanks for your constructive comment. Feel free to share your knowledge; I need it to grow and to make usable, stable versions for every level of user. Our goal for file deduplication is semantic compression, where files get deduplicated or indexed for the collective mesh while keeping privacy as the base.

And we did all of this for free: you can use your cloud storage more safely than the providers themselves offer. As a 17-year-old self-taught dev, I'm eager for your technical insights to help us stabilize this for the masses.
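The "mesh" idea described above can be sketched roughly as follows. Everything here is hypothetical and illustrative: the `MESH` table stands in for chunk metadata, and `fetch_chunk` stands in for real network calls to providers like Dropbox or Google Drive:

```python
# Hypothetical sketch of the mesh: a file is split into chunks, each
# replicated across several base providers and fetched in parallel.

from concurrent.futures import ThreadPoolExecutor

# chunk_id -> {provider_name: chunk_bytes}; in reality each lookup
# would be a network request to the provider holding that replica.
MESH = {
    0: {"dropbox": b"Hello, ", "gdrive": b"Hello, "},
    1: {"gdrive": b"mesh "},
    2: {"dropbox": b"world!"},
}


def fetch_chunk(chunk_id: int) -> bytes:
    # Pick any provider that holds a replica of this chunk.
    replicas = MESH[chunk_id]
    provider = next(iter(replicas))
    return replicas[provider]


def download(chunk_ids: list[int]) -> bytes:
    # Parallel downloads: each chunk can come from a different provider,
    # so aggregate throughput can exceed any single provider's.
    with ThreadPoolExecutor(max_workers=4) as pool:
        parts = pool.map(fetch_chunk, chunk_ids)
    return b"".join(parts)


print(download([0, 1, 2]).decode())  # Hello, mesh world!
```

`ThreadPoolExecutor.map` preserves chunk order, so the reassembled file is correct even though chunks arrive from different providers concurrently.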


Typically you want stability and predictability in a server. A platform that has a long support lifecycle is often more attractive than one with a short lifecycle.

If you can stay on v12.x for 10 years versus having to upgrade yearly to maintain support, that's ideal. 12.x should always behave the same way with your app, whereas every major version upgrade may have breaking changes.

Servers don’t need to change, typically. They’re not chasing those quick updates that we expect on desktops.


Yeah, and that's the take I expected to hear based on what was said.

However, for something like ARM and the use case this particular device may have, in reality you would _want_ (in my opinion) to be on a more rolling-release distro to pick up the updates that make your system perform better.

I'd take a similar stance for devices that are built in a homelab for running LLMs.


Depends on what you're building an ARM system for. There are proper ARM servers out there; server work isn't the exclusive domain of x86, after all.

For homelabs, that's out the window. Do whatever you want or whatever fits your needs best. This isn't the place where you'd likely find highly available networks, clustered or highly available services, UPSes with battery banks, et al.


This [0] may provide a hint. Heimdal was developed outside of the US and was not subject to export restrictions, unlike MIT Kerberos. So perhaps MIT wasn't the package of choice to begin with.

And this [1] says for interoperability reasons.

[0] https://docs-archive.freebsd.org/doc/11.1-RELEASE/usr/local/...

[1] https://freebsdfoundation.org/project/import-mit-kerberos-in...


I don't think that has anything to do with FreeBSD's choice of MIT Kerberos or Heimdal.

Well, except the FreeBSD Foundation explicitly says MIT was chosen for interoperability.

Are you disputing the FreeBSD Foundation document?


Er, sorry, I meant the whole thing about Heimdal being non-U.S. based.

Dang, your failure modes certainly are extreme. What companies actually performed a from-scratch rebuild because they failed to take a backup, or thought "today's Thursday, it's too complicated to restore"?

If an "OS upgrade" nukes your directory, that means you're running a single DC. The question is... why would you do that?


Did the one MSFT employee that “reviewed” it know of this image? If not, it doesn’t matter how many people “on the Internet” recognized this image.

I’ll never understand the implied projection.

(I don’t think this was reviewed closely if at all)


I would hope that the person who reviews their training on gitflow knows something about gitflow. And if you know something about gitflow, it's not that strange to expect them to recognise the most iconic gitflow diagram.

But even if you don't recognise the original, at least you should be able to tell that the generated copy is bullshit.


Again, I don't think this was reviewed. It was an assignment to a vendor: 'write the document and I'll hit publish'. There's a good chance the MSFT document _owner_ has no experience in the relevant area.

You're incorrect about how the publishing process works. If a vendor wrote the document, it has a single repo owner (all those docs are in GitHub) who would need to sign off on a PR. There aren't multiple layers, or really any friction, to getting content onto learn.msft.

I suggested that if there is no review process, it is a systemic issue, and that if there is a review process that failed to catch something this egregious, it is a systemic issue. My supposition is that regardless of how the publishing process works, there is a systemic failure here, and I made no claims as to how it actually works, so I'm not sure where the "you're incorrect on how it works" is coming from.

You said it takes multiple people screwing up, implying that publishing content had multiple gates/reviewers.

It doesn’t.


But if there are no gates, doesn't that mean the people who should have put the gates in there screwed up?

There have been no Gates at Microsoft for a long time.

There is no singular publishing org at MSFT. Each product publishes its own docs, generally following a style guide. But the doc process is up to the doc owner(s).

That seems to further make the case that it's a systemic problem.

The organization would have more guardrails in place if it prioritized "don't break things" over "move fast".


I think you're barking up the wrong tree here.

What?

This is how it works. There are too many people here, like the OP, making assumptions about what the process is or should be.


My dog does this thing where she picks a stick and gets you to pull on it, and she will pull on her end, too. She gets very focused on it. Pulling on the stick is the most important thing to her in that moment, when in fact it's just a stick she chose to turn into this tug of war.

That's not entirely unlike what you're doing here. You latched onto a misunderstanding of OP's intent, and by making a thing out of it got people to pull back, and now you also keep tugging on your end.

Except she does it on purpose and enjoys it, while I think you did it inadvertently and you do not seem that happy. But then, you're not a dog, of course.

You could stop pulling on the stick. I do enjoy these doggy similes, though. :)


This is a perfect description. I've probably been the dog at some point.

p_ing, see my nearby comment about what we mean by "multiple". Does that comment make any false "assumptions"? Or, is it you who are mistaken, persistently failing to understand what your interlocutors are saying?


It can be hard to resist.

There is no such thing as "making an assumption" on what a process "should be". I am asserting what it should be. A multi-trillion dollar company should absolutely have a robust review process in place. If one single person can submit plagiarised and defective material onto an official platform that implicates the company as a whole in copyright infringement, management has failed, ergo multiple people have failed, ergo the failure is systemic.

It is extremely well-known that individual humans make mistakes. Therefore, any well-functioning system has guards in place to catch mistakes, such that it takes multiple people making mistakes for an individual mistake to cascade to system failure. A system that does not have these guards in place at all, and allows one individual's failure to immediately become a system failure, is a bad system, and management staff who implement bad systems are as responsible for their failure as the individual who made the mistake. Let us be grateful that you do not work in an engineering or aviation capacity, given the great lengths you are going to defend the "correctness" of a bad system.


I've seen better review processes in hobby projects.

Neither deadlines nor cheap work-for-hire help any sort of review process, while a hobby project is normally done by someone who cares.

This is correct. It just takes one person to review it and you’re good to go.

There’s also a service that rates your grammar/clarity and you have to be above a certain score.


I'll quote the relevant part of the parent post:

> that is in itself a failure of the system

... and add some Beer flavor: POSIWID (the purpose of a system is what it does)

