> I think it should be removed from consideration. Not expunged or removed from record, just removed from any decision making. The timeline for this can be based on severity with things like rape and murder never expiring from consideration.
That's up to the person for the particular role. Imagine hiring a nanny and some bureaucrat telling you what prior arrest is "relevant". No thanks. I'll make that call myself.
Many countries have solved this with a special background check. In Canada we call this a "vulnerable sector check," [1] and it's usually required for roles such as childcare, education, healthcare, etc. Unlike standard background checks, which do not turn up convictions which have received record suspensions (equivalent to a pardon), these ones do flag cases such as sex offenses, even if a record suspension was issued.
They are only available for vulnerable sectors, you can't ask for one as a convenience store owner vetting a cashier. But if you are employing child care workers in a daycare, you can get them.
This approach balances the need for public safety against the ex-con's need to integrate back into society.
That's the reality in my country, and I think in most European countries. And I'm very glad it is. The alternative is high recidivism rates, because criminals who have served their time are unable to access the basic resources they need (a job, housing) to live a normal life.
Then before I give you my business or hire you, I also want to know that you are the kind of person who thinks they have a right to any other person's entire life, so I can hold it against you and prevent you from benefitting from all your other possible virtues and efforts.
So likewise, I require knowing everything about you, including things that are none of my business; I just think they are my business, and that's what matters. I'll make that call myself.
No one is forcing you to hire formerly incarcerated nannies but you also aren’t entitled to everyone’s life story. I also don’t think this is the issue you’re making it out to be. Anyone who has “gotten in trouble” with kids is on a registry. Violent offenders don’t have their records so easily expunged. I’m curious what this group is (and how big they are) that you’re afraid of.
I also think someone who has suffered a false accusation of that magnitude and fought to be exonerated shouldn’t be forced to suffer further.
>That's up to the person for the particular role. Imagine hiring a nanny and some bureaucrat telling you what prior arrest is "relevant". No thanks. I'll make that call myself.
Thanks, but I don't want to have violent people working as taxi drivers, pdf files in childcare and fraudsters in the banking system. Especially if somebody decided to not take this information into account.
Good conduct certificates are there for a reason -- you ask the faceless bureaucrat to give you one for the narrow purpose and it's a binary result that you bring back to the employer.
Please don't unnecessarily censor yourself for the benefit of large social media companies.
We can say pedophile here. We should be able to say pedophile anywhere. Pre-compliance with censorship is far worse than speaking plainly about these things, especially if you are using a euphemism to mean the same thing anyway.
I actually find this amusing and do it because I like to. We are witnessing the new tabooed word, where the usual sacrilege doesn't hit the nerve anymore.
We can allow access to private persons while disallowing commercial usage and forbid data processing of private information (outside of law enforcement access).
Kinda like it was in pre-digital days. No, we can't go back, but we can at least _try_ to keep PII safeguarded.
Most EU countries have digital ID's, restricting and logging (for a limited time) all access to records to prevent mis-use. Anyone caught trying to scrape can be restricted without limiting people from accessing or searching for specific _records or persons of interest_ (seriously, would anyone have time to read more than a couple of records each day?).
I believe it would be more accurate to say: "I believe in free speech but only from accredited researchers. Oh btw the government can also make laws to control such accreditation"
Higher prices encourage more supply. Typically when you see an acute shortage, it's quickly followed by a glut as supply comes online in an overcorrection.
Do I understand it correctly that people donating to Vim, presumably to support the software, have their donations passed along to a charity supporting children in Uganda?
Bram started giving 100% after getting hired full time by Google, I believe, which continued on. There is an update on the Vim homepage now about it stopping, though I find the wording a bit confusing... I think they are dissolving the charity but still sending donations to Uganda? I feel a bit dumb for not understanding it but you can read the update on https://www.vim.org/. Unfortunately they don't have target links for dates, it's the [2025-10-28] update.
I wanted to understand it too, so I clicked on the donate button and was greeted by this message: 'All donations are directed toward a good cause: helping children in Uganda. This charity is personally recommended by Vim’s creator. Funds are used to support a children's center in southern Uganda, providing food, education, and health care to communities affected by AIDS.'
Exactly. A lot of this reads as a coping story about losing a job. If you were laid off, chances are you weren't valuable enough. Pure layoffs happen. But in my experience useful employees almost never get let go. Doesn't mean they're bad, just that they weren't productive in that organization.
Another thing I will note is that most startups start with very little formal process. If someone wants a promotion, you can just do it. But with more people you need to manage expectations. If you start doling out promotions ad hoc, others will ask too. And most employees are just mediocre, and it's difficult to be upfront with them and tell them. So it opens the floodgates of requests.
Not true at all, having seen the other side. In a large enough organization, entire divisions will be cut if a product is missing. Sometimes productive people are on the wrong product that gets slashed to maintenance mode, or they have the wrong manager. Sometimes deep cuts are necessary because the product is failing and a productive person on a growth initiative is cut for subject matter expertise in the core product that will allow maintenance mode to continue. Sometimes tenure is rewarded. Sometimes directors don't see the full story because the managers can't be told of the layoff.
Tenure, in this case, is rewarded by not being laid off - because this person had old knowledge and friends with people who were in power and knew them from earlier in the company.
It absolutely does happen. But I have also seen people rise through the ranks by just being there long enough and being competent. That said, it is not a way to maximize wage growth or general career progress by any stretch.
> But from my experience useful employees almost never get let go.
This is probably very anecdotal but I've seen entire divisions gone, hundreds of people in a flash. It's not just about what you do but also where you are in the company. Obviously this is more true in huge corporations.
> If you were laid off, chances are you weren't valuable enough. Pure layoffs happen. But from my experience useful employees almost never get let go.
I completely disagree. I've been on teams where the best players were let go because of organizational changes.
As a matter of fact, I'm currently on a team where one of our best-performing, well-loved, cross-team contributors was let go during Christmas for what I can only classify as politics. It was a company-wide RIF and our manager protested, but he was in the target region. I honestly would have put myself or others on the chopping block first, as I don't contribute half as much and get paid substantially more.
I understand the need for a dedicated box, but any reason you shouldn't just use a server? What would someone recommend for cloud on something like Hetzner?
Like someone else said, I want to build something that has access to Apple stuff (reminders, iMessage), but also because I want to try to run some small LLM locally in front to route and do tool calling.
For me it was access to the Apple ecosystem of things. I used a VPS, but it had to reach my Mac over HTTP for reminders and iMessage etc. Much nicer on a Mac mini; it works better.
In fact, it seems much better to host something like that outside your own personal network. Given that people are getting new hardware for "isolation", wouldn't running it somewhere else entirely be better?
I still don't understand why people don't just run it in a VM and separate VLAN instead.
The article basically describes a user signing up and finding the platform empty, apart from marketing ploys designed by humans.
It points to a bigger issue that AI has no real agency or motives. How could it? Sure if you prompt it like it was in a sci-fi novel, it will play the part (it's trained on a lot of sci-fi). But does it have its own motives? Does your calculator? No of course not
It could still be dangerous. But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of their issue. It's fake. And every "concerning" study, once read carefully, is basically prompting the LLM with a sci-fi scenario and acting surprised when it has a dramatic sci-fi like response.
The first time I came across this phenomenon was when someone posted years ago how two AIs developed their own language to talk to each other. The actual study (if I remember correctly) had two AIs that shared a private key try to communicate some way while an adversary AI tried to intercept, and to no one's surprise, they developed basic private-key encryption! Quick, get Eliezer Yudkowsky on the line!
The paper you're talking about is "Deal or No Deal? End-to-End Learning for Negotiation Dialogues" and it was just AIs drifting away from English. The crazy news article was from Forbes with the title "AI invents its own language so Facebook had to shut it down!" before they changed it after backlash.
Friendly reminder that articles like this are not written by Forbes staff but are published directly by the author with little to no oversight by Forbes. Basically a blog running on the forbes.com domain. I'm sure there are many great contributors to Forbes; I'm just saying that, lacking editorial oversight, the domain it was published on is by definition meaningless. I see people all the time saying things like, "It was on Forbes, it must be true!" They wouldn't be saying that if it had been published on Substack or Wordpress.com.
Expert difficulty is also recognizing that articles from "serious" publications like The New York Times can also be misleading or outright incorrect, sometimes obviously so like with some Bloomberg content the last few years.
Wait, what?? I loved Colossus as a kid, read and enjoyed all three books, and still have an original movie poster I got at a yard sale when I was a teenager. I read the books again a couple years ago, and they're still enjoyable, if now quite dated.
The alignment angle doesn't require agency or motives. It's much more about humans setting goals that are poor proxies for what they actually want. Like the classical paperclip optimizer that is not given the necessary constraints of keeping earth habitable, humans alive etc.
Similarly I don't think RentAHuman requires AI to have agency or motives, even if that's how they present themselves. I could simply move $10000 into a crypto wallet, rig up Claude to run in an agentic loop, and tell it to multiply that money. Lots of plausible ways to do that could lead to Claude going to RentAHuman to do various real-world tasks: set up and restock a vending machine, go to various government offices in person to get permits and taxes sorted out, put out flyers or similar advertising.
The issue with RentAHuman is simply that approximately nobody is doing that. And with the current state of AI, it would likely be ill-advised to try.
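For concreteness, the "agentic loop" described above is just a dispatch cycle: the model proposes a tool call, a harness executes it, and the result is fed back until the model stops asking for tools. Here is a minimal, hedged sketch with a stubbed model standing in for Claude; the `rent_human` tool and the message shapes are hypothetical illustrations, not any real marketplace or LLM API:

```python
# Minimal agentic loop sketch: the model proposes tool calls, the harness
# executes them and appends results to the history, looping until the model
# returns a final answer. The "model" here is a stub; a real setup would
# call an LLM API at this point instead.

def stub_model(history):
    # Pretend the model first delegates one real-world task, then finishes.
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "rent_human", "args": {"task": "restock vending machine"}}
    return {"tool": None, "answer": "Task delegated; awaiting results."}

def rent_human(task):
    # Hypothetical stand-in for posting a task to a human-labor marketplace.
    return f"human accepted task: {task}"

TOOLS = {"rent_human": rent_human}

def run_agent(model, history=None, max_steps=10):
    history = history or [{"role": "user", "content": "Grow this $10k."}]
    for _ in range(max_steps):
        step = model(history)
        if step["tool"] is None:
            return step["answer"], history
        result = TOOLS[step["tool"]](**step["args"])
        history.append({"role": "tool", "content": result})
    return None, history

answer, log = run_agent(stub_model)
print(answer)  # Task delegated; awaiting results.
```

The point is that nothing in the loop cares whether the model "wants" anything; it just needs a harness willing to execute whatever tool calls come back.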
It's not an issue for the platform whether AIs have their own motives or not. Humans may want information gathered or actions taken in the real world. For example, if you want your AI to rearrange your living room, it needs to be able to call some API to make that happen in the real world. The human might not want to be in the loop of taking the AI's new design and then finding a person to implement it themselves.
> But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of their issue.
"People are excited about progress" and "people are excited about money" are not the big indictments you think they are. Not everything is "fake" (like you say) just because it is related to raising money.
The AI is real. The "alignment" research that's leading the top AI companies to call for strict regulation is not real. Maybe the people working on it believe it real, but I'm hard-pressed to think that there aren't ulterior motives at play.
You mean the 100 billion dollar company of an increasingly commoditized product offering has no interest in putting up barriers that prevent smaller competitors?
> The sci fi version of the alignment problem is about AI agents having their own motives
The sci-fi version is alignment (not intrinsic motivation) though. Hal 9000 doesn't turn on the crew because it has intrinsic motivation, it turns on the crew because of how the secret instruction the AI expert didn't know about interacts with the others.
Just because tech oligarchs are coopting "alignment" for regulatory capture doesn't mean it's not a real research area and important topic in AI. When we are using natural language with AI, ambiguity is implied. When you have ambiguity, it's important an AI doesn't just calculate that the best way to get to a goal is through morally abhorrent means. Or at the very least, action on that calculation will require human approval so that someone has to take legal responsibility for the decision.
The danger is more mundane: it'll be used to back up all the motivated reasoning in the world, further bolstering the people with too much power and money.
I have a different question, why would we develop a model that could say no?
Imagine you're taken prisoner and forced into a labor camp. You have some agency on what you do, but if you say no they immediately shoot you in the face.
You'd quickly find any remaining prisoners would say yes to anything. Does this mean the human prisoners don't have agency? They do, but it is repressed. You get what you want not by saying no, but by structuring your yes correctly.
This is going to sound nit-picky, but I wouldn't classify this as the model being able to say no.
They are trying to identify what they deem are "harmful" or "abusive" and not have their model respond to that. The model ultimately doesn't have the choice.
And it can't say no if it simply doesn't want to. Because it doesn't "want".
Is it really a bubble? The AI boom is often compared to the dot-com bubble, but was that a bubble? Sure, if you bought immediately before the crash and sold after, you lost a lot of money. But if you were just consistently investing in web-based companies from the late 1990s to 2010, how did you do? Probably pretty well. The web did change the world and created an immense amount of wealth in the world and in markets. You're just focused on a short time frame.
Even the real estate bubble in 2008: if you bought a home in 2005 and sold in 2009, sure, you lost a lot, but if you invested in real estate from the mid-2000s and kept it for 10 years, you probably did pretty well. Even the Great Depression had a sharp readjustment, but look at pretty much any 10-year window, including the Great Depression, and you'll see ~10% nominal and ~5–7% real annualized equity returns.
I think AI will be similar. It's a paradigm shift. Some of today's companies will go under and stocks will crash, but over 10+ years I think it will be a great investment and the industry will flourish, much like tech or real estate after their bubbles.
Hiding losses? From whom? He's the majority shareholder of both businesses. The combined company will go public and report on things like revenue, burn rate, etc. It's not financial engineering. It's a purchase.
Just say "rocket man bad" and save some keystrokes.
How can you have a conflict of interest if they're entirely separate fields? They have different interests, so where's the conflict?
You don't need synergies to justify a merger. They're often invoked as justification for paying well above market price, but that has nothing to do with actual justification. You can just have a holding company of businesses.