Hacker News | halayli's comments

No offense, but it looks like the reason behind the OAuth confusion is the author. I had to read halfway through before reaching a definition, and even that was a poor explanation. Sometimes certain topics are difficult to understand because the person who originally explained them wasn't good at communicating the information.

What about using claude -p as an API interface?

This is the definition of motivated reasoning: you believe what you want to believe.

My naive take is that we invented it as a math tool first, and later rediscovered it in nature when we discovered the electromagnetic field.

The electromagnetic field is naturally a single complex-valued object (the Riemann–Silberstein vector, F = E + icB), and of course Maxwell's equations collapse into a single equation for this complex field. The symmetry group of electromagnetism, and more specifically the duality rotation between E and B, is U(1), which is also the unit circle in the complex plane.
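A quick sketch of that collapse for the source-free (vacuum) case, which I believe is the standard presentation:

```latex
% Riemann–Silberstein vector for the source-free Maxwell equations:
\mathbf{F} = \mathbf{E} + i c \mathbf{B}, \qquad
\nabla \cdot \mathbf{F} = 0, \qquad
i\,\partial_t \mathbf{F} = c\,\nabla \times \mathbf{F}.
% Splitting these into real and imaginary parts recovers all four
% vacuum Maxwell equations: div E = 0, div B = 0,
% curl E = -dB/dt, curl B = (1/c^2) dE/dt.
% The duality rotation F -> e^{i\theta} F mixes E and B while
% leaving the equations invariant; the phases e^{i\theta} form
% U(1), the unit circle in the complex plane.
```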


Maybe I missed it, but I don't see them defining what they mean by ethics. Ethics/morals are subjective and change dynamically over time. Companies have no business trying to define what is and isn't ethical, due to conflict of interest. The elephant in the room is not being addressed here.


Especially as most AI safety concerns are essentially political, and uncensored LLMs exist anyway for people who want to do crazy stuff like having a go at building their own nuclear submarine or rewriting their git history with emoji only commit messages.

For corporate safety it makes sense that models resist saying silly things, but it's okay for that to be a superficial layer that power users can prompt their way around.


Ah the classic Silicon Valley "as long as someone could disagree, don't bother us with regulation, it's hard".


Often abbreviated to simply "Regulation is hard." Or "Security is hard."


Your water supply definitely wants ethical companies.


Ethics are all well and good but I would prefer to have quantified limits for water quality with strict enforcement and heavy penalties for violations.


Of course. But while the lawmakers hash out the details it's good to have companies that err on the safe side rather than the "get rich quick" side.

Formal restraints and regulations are obviously the correct mechanism, but no world is perfect, so whether we like it or not, we and the companies we work for are ultimately responsible for the decisions we make and the harms we cause.

De-emphasizing ethics does little more than give large companies cover to do bad things (often with already great impunity and power) while the law struggles to catch up. I honestly don't see the point in suggesting ethics is somehow unimportant; it doesn't make any sense to me (more directed at the GP than the parent here).


Is it ethical for a water company to shut off water to a poor immigrant family because of non-payment? Depending on the AI's political and DEI bent, you're going to get totally different answers. Having people judge an AI's response will also be influenced by each evaluator's personal bias.


I note in the UK that it is illegal for water companies to cut off anyone for non-payment, even if they're an Undesirable. This is because humans require water.

How useful/effective would a business AI be if it always plays by that view?

Humans require food, I can't pay, DoorDash AI should provide a steak and lobster dinner for me regardless of payment.

Take it even further: the so-called Right to Compute Act in Montana supports "the notion of a fundamental right to own and make use of technological tools, including computational resources". Is Amazon's customer service AI ethically (and even legally) bound to give Montana residents unlimited EC2 compute?

A system of ethics has to draw a line somewhere when it comes to making a decision that "hurts" someone, because nothing is infinite.

As an aside, what recourse do water companies in the UK have for non-payment? Is it just a convoluted civil lawsuit/debt process? That seems ripe for abuse.


Civil recovery, yes. It's not like you don't know where the customer lives.

Doesn't seem to be a problem for the water companies, which are weird regulated monopolies that really ought to be taken back under taxpayer control. Scottish Water is nationalized and paid through the council tax bill.


> Humans require food, I can't pay, DoorDash AI should provide a steak and lobster dinner for me regardless of payment.

Bad example.

That humans require water doesn't force water companies to supply Svalbarði Polar Iceberg Water: https://svalbardi.com


Ok, do we have to give them McDonald's?

Raw gruel and a vitamin pill: https://en.wikipedia.org/wiki/Gruel

Or whatever's cheapest for your local food supply. Every time I've done this game with supermarket produce, it comes to under £1/day to support someone's nutritional requirements; the currency tells you where I played that game.

McD is pretty expensive these days; I've seen cheaper even in the category of fast food.


I'd love to see a return to the idea of government cheese, or at least align food stamps with WIC (WIC in the US is a specific food aid program restricted to ostensibly healthier foods), instead of allowing the ridiculous moral hazard and waste posed by regular food stamps.

I was thinking more about externalities, e.g. some company dumping chemical pollutants into a nearby water system, and not water companies themselves.

I understand the point you’re making but I think there’s a real danger of that logic enabling the shrugging of shoulders in the face of immoral behavior.

It’s notable that, no matter exactly where you draw the line on morality, different AI agents perform very differently.


I am not sure the author knows what spatial frequency means. Taking a well-defined measurement unit and using it as an expression feels pretentious.


I think the parent comment was pointing to the lack of an established causation link. The finding in their abstract is extrapolated by statistical inference; for example, smokers tend to drink more, etc. The paper does take such factors into account. Personally, I wouldn't jump to such a strong conclusion from statistical inference, because it closes the door on other factors that might be even stronger when combined. The paper reflects motivated reasoning more than a discovery. That said, smoking is of course a major health risk; I am just pointing at the research approach.


Smoking is a major health issue, but it's barely a driver of midlife mortality. Smoking tends to kill you later.


I feel libffi is better suited and supports most platforms.


Tsoding made a video about it: https://www.youtube.com/watch?v=0o8Ex8mXigU


The ffcall library even more so.


Wasn't aware of ffcall; it's definitely a better fit.


Your question leaks your intentions and drives the LLM to confirm your cognitive bias; it treats your intent as a conclusion. Try to phrase your questions in a way that allows the LLM to arrive at the word/concept of "suppression" on its own, in a more neutral, probabilistic manner when the context hints at it, instead of giving it the words you want to hear. Otherwise you're just falling into confirmation bias.


On a similar note, there are also these field attributes that are very helpful for catching similar issues:

https://clang.llvm.org/docs/AttributeReference.html#counted-...

