nilirl's comments | Hacker News

Beautiful UI.

What are the criteria for promoting pieces?


Same here. I'm really curious about this too. What do they mean by "wonderful"? I suppose some of the pieces here might not be very well-known or popular, but they are inspiring, or maybe they are a good resource for learning something. All I can see is they are maybe associated with a read-later app?


At a quick glance, the main criterion seems to be that it's libertarian and anti-socialist.


Not quite. It's a random selection on every visit to the site.


Wondering about that too. What are the "wonderfulness" criteria?

For example, the first article I clicked on: https://juliagalef.com/2017/08/23/unpopular-ideas-about-soci...

has some pretty vile stuff:

> Non-offending pedophiles should be more widely accepted by society. It’s unfair to ostracize someone for a desire they were born with, and integrating them into society makes them less likely to cause harm.


What's vile about that?


Amorality.

There's no evidence that anyone is born with particular sexual deviations. It attempts to simultaneously absolve and normalize attitudes that ideate rape of children, so long as they don't act on it. That's a pretty thin and permeable line to draw.


were you born with an attraction to women or is there no evidence to support it?


The truth


Why is it implicit that semantic search will outperform lexical search?

Back in 2023 when I compared semantic search to lexical search (tantivy; BM25), I found the search results to be marginally different.

Even if semantic search has slightly more recall, does the problem of context warrant this multi-component, homebrew search engine approach?

By what important measure does it outperform a lexical search engine? Is the engineering time worth it?


It depends on how you test it. I recently found that the way devs test it differs radically from how users actually use it. When we first built our RAG, it showed promising results (around 90% recall on large knowledge bases). However, when the first actual users tried it, it could barely answer anything (closer to 30%). It turned out we relied on exact keywords too much when testing it: we knew the test knowledge base, so we formulated our questions in a way that helped the RAG find what we expected it to find. Real users don't know the exact terminology used in the articles. We had to rethink the whole thing. Lexical search is certainly not enough. Sure, you can run an agent on top of it, but that blows up latency - users aren't happy when they have to wait more than a couple of seconds.


This is the gap that kills most AI features. Devs test with queries they already know the answer to. Users come in with vague questions using completely different words. I learned to test by asking my kids to use my app - they phrase things in ways I would never predict.


Ironically, pitting an LLM (ideally a completely different model) against what you're testing, and letting it write out-of-the-ordinary "human" queries to use as test cases, tends to work well too, if you don't have kids you can use as a free workforce :)


I built a system to do exactly this: https://docs.kiln.tech/docs/evaluations/evaluate-rag-accurac...

Basically it:

- iterates over your docs to find knowledge specific to the content

- generates hundreds of pairs of [synthetic query, correct answer]

- evaluates different RAG configurations for recall
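For anyone wanting to roll something similar by hand, the core recall loop is small. Here's a minimal sketch in Python, with a toy keyword backend standing in for a real retriever; all names here (`recall_at_k`, `keyword_retrieve`, the corpus) are illustrative, not any product's API:

```python
# Minimal sketch of a synthetic-QA recall evaluation for a RAG setup.
# Each test case pairs a query with the id of the chunk known to answer it.

def recall_at_k(retrieve, test_cases, k=5):
    """Fraction of queries whose known-relevant chunk appears in the top k."""
    hits = 0
    for query, relevant_chunk_id in test_cases:
        results = retrieve(query, k)  # ranked list of chunk ids
        if relevant_chunk_id in results:
            hits += 1
    return hits / len(test_cases)

# Toy backend: rank chunks by keyword overlap, just to make this runnable.
corpus = {
    "c1": "how to delete your profile and close your account",
    "c2": "invoice settings for the Initech workspace",
}

def keyword_retrieve(query, k):
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda cid: -len(q_words & set(corpus[cid].lower().split())),
    )
    return scored[:k]

cases = [("close account", "c1"), ("Initech invoice", "c2")]
print(recall_at_k(keyword_retrieve, cases, k=1))  # 1.0 on this toy corpus
```

The interesting part is generating the `cases` list with an LLM that hasn't seen your test phrasing, so the queries don't parrot the articles' exact terminology.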


How did you end up changing it? Creating new evals to measure the actual user experience seems easy enough, how did that inform your stack?


Totally depends on use case.

It solves some types of issues lexical search never will. For example if a user searches "Close account", but the article is named "Deleting Your Profile".

But lexical solves issues semantic never will. Searching an invoice DB for "Initech" with semantic search is near useless.

Pick a system that can do both, including a hybrid mode, then evaluate if the complexity is worth it for you.
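If it helps, one common way to combine the two is reciprocal rank fusion (RRF), which needs no score normalization. A quick Python sketch with made-up document ids and rankings:

```python
# Reciprocal rank fusion: each ranked list contributes 1 / (k + rank) per
# doc, so a doc ranked highly by either retriever floats to the top.

def rrf_fuse(ranked_lists, k=60):
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings: BM25 nails the exact-keyword query ("Initech"),
# the embedding model nails the paraphrase ("close account" vs. "delete profile").
bm25_ranking = ["invoice-initech", "delete-profile", "billing-faq"]
semantic_ranking = ["delete-profile", "billing-faq", "invoice-initech"]

fused = rrf_fuse([bm25_ranking, semantic_ranking])
print(fused[0])  # "delete-profile": ranked well by both lists
```

The evaluation step is then the same either way: run your query set against lexical-only, semantic-only, and fused, and see whether the fusion earns its complexity.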


Depends on how important keyword matching vs something more ambiguous is to your app. In Wanderfugl there’s a bunch of queries where semantic search can find an important chunk that lacks a high bm25 score. The good news is you can get all the benefits of bm25 and semantic with a hybrid ranking. The answer isn’t one or the other.


The benefit I see is you can have queries like "conversations between two scientists".

It's very dependent on use case, imo.


Can you have a coding philosophy that ignores the time or cost taken to design and write code? Or a coding philosophy that doesn't factor in uncertainty and change?

If you're risking money and time, can you really justify this?

- 'writing code that works in all situations'

- 'commitment to zero technical debt'

- 'design for performance early'

As a whole, this is not just idealist, it's privileged.


I would argue that these:

- 'commitment to zero technical debt'

- 'design for performance early'

Will save you time and cost in designing, even in the relatively near term of a few months when you have to add new features etc.

There are obviously extremes of "get something out the door fast and broken then maybe neaten it up later" vs "refactor the entire codebase any time you think something could be better", but I've seen more projects hit a wall due to leaning too far to the first than the second.

Either way, I definitely wouldn't call it "privileged" as if it isn't a practical engineering choice. That seems to just frame things in a way where you're already assuming early design and commitment to refactoring is a bad idea.


Your argument hinges on getting the design right, upfront. That assumes uncertainty is low or non-existent.

Time spent, monetary cost, and uncertainty, are all practical concerns.

An engineering problem where you can ignore time spent, monetary cost, and uncertainty, is a privileged position. A very small number of engineering problems can have an engineering philosophy that makes no mention of these factors.


Their product is a high-throughput database for financial transactions, so they might have different design requirements than the things you work on.


Yes, my point is it's a privilege to have an engineering philosophy that doesn't need to address time spent, monetary cost, or uncertainty.


It’s the equivalent of someone running on a platform where there would be world peace and no hunger.

That’s great and all as an ideal, but realistically impossible, so if you don’t have anything more substantial to offer you aren’t really worth taking seriously.


You forgot “get it right first time” which goes against the basic startup mode of being early to the market or die. For some companies, trying to get it right the first time may make sense but that can easily lead to never shipping anything.


This post does a great job of limiting its terribleness by being short.

I read it twice; twice I got nothing.

It is incoherent. Vaguely attempting to take a swing at ... modularity???


So: The author wants to work for a company with resources.

Unfortunately, details take time and time takes money.

For a business's survival, the company's relative positioning in the market, access to sales and marketing channels, and financing are much stronger concerns.

This is a designer longing for endless tinkering.


What diction is "appropriate to the subject matter" is a negotiation between author and reader.

I think the author is ok with it being inappropriate for many; it's clearly written for those who enjoy math or CS.


I think it's a trick. It seems to me the article is just a series of ad-hoc assumptions and hypotheses without any support. The language aims to hide this, and makes you think about the language instead of its contents. Which is logically unsound: in a sharp peak, micro optimizations would give you a clearer signal where the optimum lies since the gradient is steeper.


> In a sharp peak, micro optimizations would give you a clearer signal where the optimum lies since the gradient is steeper.

I would refuse to even engage with the piece on this level, since it lends credibility to the idea that the creative process is even remotely related to or analogous to gradient descent.


I wouldn't jump to call it a trick, but I agree, the author sacrificed too much clarity in a bid for efficiency.

The author set up an interesting analogy but failed to explore where it breaks down or how all the relationships work in the model.

My inference about the author's meaning was such: In a sharp peak, searching for useful moves is harder because you have fewer acceptable options as you approach the peak.


Fewer absolute or relative? If you scale down your search space... This only makes some kind of sense if your step size is fixed. While I agree with another poster that a reduction of a creative process to gradient descent is not wise, the article also misses the point of what makes such a gradient descent hard -- it's not sharp peaks, it's the flat area around them -- and the presence of local minima.


I see your point. I'd meant relatively fewer progressive options compared to an absolute and unchanging number of total options.

But that's not what the author's analogy would imply.

Still, I think you're saying the author is deducing the creative process as a kind of gradient descent, whereas my reading was the author was trying to abductively explore an analogy.


True, but my point is that not only does the analogy not work, the author also doesn't understand the thing he makes the analogy with, or at least explores the thought so shoddily that it makes no sense.

It's somewhat like saying cars are faster than motorbikes because they have more wheels-- it's like with horses and humans, horses have four legs and because of that are faster than humans with two legs. It's wrong on both sides of the analogy.


I enjoy maths and CS and I could barely understand a word of it. It seems to me rather to have been written to give the impression of being inappropriate for many, as a stand-in for actually expressing anything with any intellectual weight.


I liked the post but can someone explain how macro choices change the acceptance volume?

Is it their effect on the total number of available choices?

Does picking E minor somehow give you fewer options than C major (I'm not a musician)?


I'm personally a little frustrated with these music theory answers. Trust me folks, these answers are nearly impossible for a non-musician to understand (and even as a musician it's a bit impenetrable).

E minor gives you the exact same number of options as C major. The options are just shuffled around a little bit. You literally get the same number of notes in either, just a slightly different set. It isn't any more complex. Listeners aren't going to notice a difference, except one will probably sound happy and one will probably sound sad/angry. The "acceptance volume", to use the blog author's term, isn't any different.

At best, it can change things a little bit for some instruments. For example, with a vocalist, their voice can only go so high. They might be able to hit up to a high C, but not even higher up to a high E. If you're in C major, that's great, the vocalist's highest note (C) is the 'home note' which sounds great (playing a C in C Major makes the song sound like it's 'finished'). If you're in E minor, the 'home note' is E, and as mentioned they wouldn't be able to hit that note. So you wouldn't really be able to 'finish' on a high note.

Ultimately, I doubt the author is a musician. It was a strange example to make their point.


No, you have an equal number of options (minor and major are effectively transpositions/rotations... e.g. the chord progressions are "m dim M m m M M" for minor (m-minor, M-major, dim-diminished), vs "M m m M M m dim" for major).

The post is likely getting to the point that, for english-speaking/western audiences at least, you are more likely to find songs written in C major, and thus they are more familiar and 'safer'. You _can_ write great songs in Em, but it's just a little less common, so maybe requires more work to 'fit into tastes'.

edit: changed 'our' to english/western audiences


> Does picking E minor somehow give you fewer options than C major (I'm not a musician)?

Short answer: No. No matter what note you start on you have exactly the same set of options.

Long answer: No. All scales (in the system of temperament used in the vast majority of music) are symmetrical groups of transpositions of certain fundamental scales.[1] These work very much like a cyclic group if you have done algebra. In the example you chose, E minor is the "relative minor" of G Major, meaning that if you play an E Aeolian mode it contains all the same notes as G Major, and G major gives you the exact same options as C Major or any other Major Scale. What Messiaen noticed is that there are grouped sets of "Modes of limited transposition" which all work this way. So the major scale (and its “modes”, meaning the scales with the same key signature of sharps or flats but starting on each degree of the major scale) can be transposed exactly 11 times without repeating. There are 3 other scales that have this property (normally these are called the harmonic minor, melodic minor and melodic major[2]). There are also modes of limited transposition with only 1 transposition (the chromatic scale), 2 (the whole-tone scale), 3 (the "diminished scale") and so on. Messiaen explains them all in that text if you're interested.

[1] This theory was first written out in full in Messiaen's "The technique of my musical language" but is usually taught as either "Late Romantic" or "Jazz" Harmony depending on where you study https://monoskop.org/images/5/50/Messiaen_Olivier_The_Techni...

[2] If you do "classical" harmony, your college may teach you the minor scales wrong, with a descending version that is just a mode of the major scale. You may also not have been taught melodic major, but it's awesome. (By “wrong” here, I mean specifically that Messiaen and Schoenberg would say it’s wrong, because a scale is a key signature/tonal area and so can’t have different notes when ascending versus descending. If there are two sets of different notes, Messiaen would say they are two scales, and I would agree.)
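If you want to sanity-check the "limited transposition" counts without reading Messiaen, it's a few lines of computation. A sketch in Python, treating a scale as a set of pitch classes mod 12:

```python
# Count how many distinct pitch-class sets a scale's 12 transpositions produce.
# A "mode of limited transposition" is one where this count is less than 12.

def distinct_transpositions(scale):
    return len({frozenset((p + t) % 12 for p in scale) for t in range(12)})

major      = {0, 2, 4, 5, 7, 9, 11}     # C major (and its modes)
chromatic  = set(range(12))
whole_tone = {0, 2, 4, 6, 8, 10}
octatonic  = {0, 1, 3, 4, 6, 7, 9, 10}  # the "diminished" scale

print(distinct_transpositions(major))       # 12: transposes 11 times before repeating
print(distinct_transpositions(chromatic))   # 1
print(distinct_transpositions(whole_tone))  # 2
print(distinct_transpositions(octatonic))   # 3

# And the relative-minor claim: E Aeolian contains the same notes as G major.
e_aeolian = {(p + 4) % 12 for p in {0, 2, 3, 5, 7, 8, 10}}  # natural minor from E
g_major   = {(p + 7) % 12 for p in major}
print(e_aeolian == g_major)  # True
```

This is only the set-theoretic half of the story, of course; it says nothing about which note functions as the tonal center.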


It...depends.

If you're working in a continuous environment rather than discrete (choirs and strings can fudge notes up or down a bit, but pianos are stuck with however they're tuned), you'll often find yourself wanting to produce harmonies at perfect whole-number ratios -- e.g., for a perfect fourth (the gap between the first and second notes in "here comes the bride") you want a ratio of 4:3 in the frequencies of the two notes, and for a major third (the gap between the first and second notes in "oh when the saints, go marching in...") you want a ratio of 5:4. Those small, integer ratios sound pleasing to our ears.

Those ratios aren't scale-invariant though when you move up the scale. Here's a truncated table:

Unison (assume to be C as the key we're working in): 1

Major Second (D): 9/8

Major Third (E): 5/4

However, E is also a major second above D, so in the key of D for a "justly tuned" instrument, you would want the ratio E/D to also be 9/8. Let's look at that table though: (5/4)/(9/8) is 10/9 -- about 1.2% too small (too "flat").

When tuning something like a piano then where you can't change the frequency of E based on which key you're playing in, you have to make some sort of compromise. A common compromise is "equal temperament." To achieve scale invariance in any key you need an exponential function describing the frequencies, and the usual one we choose is based on 2^(1/12) since an octave having exactly twice the fundamental frequency is super important and there are 12 gaps in normal western music as you move up the scale from the fundamental frequency to its octave.

Doing so makes some intervals sound "worse" (different anyway, but it makes direct translations hard) than they would in, e.g., a choir. A major third, for example, is 0.8% sharp, and a perfect fourth is 0.1% sharp in that tuning system.
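To make those percentages concrete, here's a quick Python check of equal-tempered intervals (2^(n/12)) against the just ratios:

```python
# How sharp (+) or flat (-) an equal-tempered interval is, as a percentage,
# relative to its just-intonation ratio.

def sharpness_percent(semitones, just_ratio):
    equal = 2 ** (semitones / 12)
    return (equal / just_ratio - 1) * 100

print(sharpness_percent(4, 5 / 4))  # major third:    ~ +0.79% sharp
print(sharpness_percent(5, 4 / 3))  # perfect fourth: ~ +0.11% sharp
print(sharpness_percent(7, 3 / 2))  # perfect fifth:  ~ -0.11% flat
```

The fifth and fourth come out almost pure; the major third is where equal temperament compromises most audibly.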

Answering your question, at first glance you would expect the scale invariance to therefore not limit your choices. Every key is identical, by design.

That's not quite right though for a number of reasons:

1. True equal temperament is only sometimes used, even for instruments like pianos. A tuner might choose a "stretched" tuning (slightly sharpening high notes and flattening low notes) or some other compromise to make most music empirically sound better. As soon as you deviate from a strict exponential scale, you actually live in a world where the choice of key matters. It's not a huge effect, but it exists.

2. Even with true equal temperament or in a purely vocal exercise or something, there are other issues. Real-world strings, vocal folds, etc aren't spherical cows in a frictionless vacuum. A baritone voice doesn't sound different just because their voice is lower, but because of a different timbre. When you choose a different key, you'll be moving the pitch of the song up or down a bit, exercising different vocal regions for singers, requiring different vocal types, or otherwise interacting with those real-life deviations from over-simplified physics. Even for something purely mechanical like piano strings, there's a noticeable difference in how notes resonate or what overtones you expect or whatnot. Changing the key changes (a little) which of those you'll hear.

3. Related to (2), our ears also aren't uniform across the frequency spectrum, and even if they were our interpretations of sounds also depends on sounds we've heard before, leading to additional sources of variation in the "experience" of a slightly lower or slightly higher key.


Minor edit: The "5.3%" I wrote out seemed too large and was bothering me all day. The culprit is that a Major Third is 5/4, not 4/3, leading to the D/E ratio being 1.2% too small rather than 5.3% too large. Apologies.


This was weak.

The author's main counter-argument: We have control in the development and progress of AI; we shouldn't rule out positive outcomes.

The author's ending argument: We're going to build it anyway, so some of us should try and build it to be good.

The argument in this post was a) not very clear, b) not greatly supported and c) a little unfocused.

Would it persuade someone whose mind is made up that AGI will destroy our world? I think not.


> a) not very clear, b) not greatly supported and c) a little unfocused.

Incidentally this was why I could never get into LessWrong.


The longer the argument, the more time and energy it takes to poke holes in it.


Instead of asking people why they're asking you "to be more empathetic", maybe you should ask yourself why you're prompting such a request?


This is addressed in the post:

> "Usually, 'be more empathetic' was a veiled request for me to modify my behavior or thinking towards someone (e.g. they thought I was rude to someone and wanted me to apologize and change my behavior)."

And you actually illustrate the entire point of the post.

Imagine that someone's upset with you and tells you to be more "gropulent". Many people have said this to you, but gropulence doesn't come naturally to you, and the term is bandied about in a wide variety of situations, making it hard to pick up context clues. There are people who call themselves "gropules" who can't explain how they use gropulence to support the claims that they make about others, and they sound an awful lot like psychics, who we all know are frauds.

How would you start learning to be gropulent?

I hope you'd be as curious and thorough as the author of TFA.


'be more empathetic' is not a veiled request, it is openly declared. The author subverts the ask for behavior change by a) calling it veiled and b) not treating it as the main argument against which they're trying to make a point.

Like the author, you're constructing a similar straw man argument: selecting a specific use of the word and making that the main point to argue against.

'be more empathetic' is an argument to behave differently with the people around you. Not think differently; behave differently.


Can you tell me which specific use of the word "empathy" I am arguing against? Because I don't think I'm arguing against any definition at all.

I think I'm arguing that telling someone to behave a way that they don't understand is unhelpful. That could be "empathetic", "thankful", or "gropulent". ** Taking the author at their word, they don't understand the request.

When they ask for clarification, they don't receive it.

In that way, it is veiled, similarly to my "gropulent" analogy.

In other words, the author is being asked to behave differently, but not given guidance on how to behave differently. Which is why they wrote this piece about what empathy is.

I think the author would have gotten a lot further by asking how rather than why, but the author admits that they thought that requests to be empathetic were requests "to be fake and lie". (i.e. They misunderstood what "empathy" meant.)


I was referring to this:

> who can't explain how they use gropulence to support the claims that they make about others, and they sound an awful lot like psychics, who we all know are frauds

You invoked usage that connotes vapid meaning.

I see your argument: How can someone do something if they don't know what to do?

Explicit instruction is useful to a novice; say a toddler or someone new to a domain. But most adults don't spend the day explicitly telling each other how to behave socially.

A case can be made for individuals who display some difficulty learning this vicariously, but considering that should affect <1% of the world population, I think it's reasonable to suspect misbehavior.

i.e "you didn't tell me how to be nice, so how could I be nice?" is not a reasonable excuse for most adults.


Ah, thank you.

My sentence about "gropules" was a dig at "empaths". You may not have run into them, but they're tarot and crystal adjacent. It's a teeny tiny minority of people who talk about "empathy", but they do use it vapidly, which could lead people with an underdeveloped sense of empathy to dismiss the whole concept as something akin to new-age nonsense. I've met enough to last me a while though.

However, you're right about my argument, and our disagreement lies in the affected population.

"17% of children aged 3–17 years were diagnosed with a developmental disability, as reported by parents, during a study period of 2009–2017. These included ASD, attention-deficit/hyperactivity disorder (ADHD), blindness, and cerebral palsy, among others."[1]

Now, that's children, and with the correct support, people with those disabilities can be taught empathy along with other social skills.

But what about those without support, those who should have been diagnosed and supported but weren't, and those who didn't meet the criteria to be diagnosed with an official disability/disorder but still struggle?

I may be an outlier, but I would estimate that about 1 in 5 people that I meet have some trouble with empathy (i.e. semi-frequently misunderstanding the feelings and/or motivations of others).

Ultimately, I agree, ignorance is no excuse for bad behavior, but that doesn't mean we shouldn't act empathetically to those who have trouble doing so. If anything, the author's piece is a fantastic path to developing a sense of empathy for the <1-20% of adults affected by a lack of it.

1: https://www.cdc.gov/autism/data-research/index.html


"Do I lack a theory of mind for others?"

Or call it empathy, whatever.


That's a tautology and already answered in TFA. You're prompting the request because you are not being sufficiently obedient.


That's setting up a straw man.

By framing an argument against semantics or social obedience, you're ignoring self-implicating behavior; you're intentionally ignoring people's needs.

Why not ask "What am I doing wrong?" instead of "Hmm ... what is the nature of empathy? How may a linguist view the word? What is its function? Ah! Is there an interesting generalization I can find here? Wow, let us dig deeper, this is no time to consider how I treat other people."


>"Hmm ... what is the nature of empathy? How may a linguist view the word? What is it's function? Ah! Is there an interesting generalization I can find here? Wow, let us dig deeper"

The funny part is that if you care about a semantic argument I know you will care about how you treat other people, too.

It's the person who strongly insists on not discussing what words mean who 9 times out of 10 turns out to be dangerous.


I think that's a good point. I think the author says a lot of good things about empathy in the article, the nature of which goes deeper than "how may a linguist view the word?", but if they're being prompted with "be more empathetic" often, perhaps their behaviour really should be modified.


>Why not ask "What am I doing wrong?"

Because that's the first step to getting your needs ignored. And in conversations where "empathy" is brought up you're usually exactly one step from getting your needs ignored.

If you're doing something wrong, and I decide to do something about it, minimum decency dictates I make sure you understand what it is. Bringing into it some vague abstract notion that everyone with half a critical mind turns out to have a whole personal exegesis about, but I only know about it from everyone else? Now there's a clear signal that nobody in the room actually cares how people are treated.


> Because that's the first step to getting your needs ignored. And in conversations where "empathy" is brought up you're usually exactly one step from getting your needs ignored.

When someone's being asked to have more empathy, they're probably in the middle of ignoring someone else's needs right then.


>When someone's being asked to have more empathy, they're probably in the middle of ignoring someone else's needs right then.

Yes, correct. That's exactly how abusers expect us to think things work.

The difference is whether the person asking you to be more X cares if you have agreed-upon criteria of what that even looks like.

Given how many people ostensibly driven by positive motivations seem to be rubbed the wrong way by "empathy", well...


Ah, I see, so when someone yells at me "Please don't stab me", it's actually, I, who is at risk of being stabbed.


Please don't be like that. I don't condone you trying to stab anyone, but if you do anyway - I think it's quite obvious how that'll put you at a higher risk of getting stabbed yourself. Empathy!


So, did the Covid 19 pandemic force multiple insurance companies into insolvency?

Also, what does new product development look like for industries like this? How does one search for new financial products? Is it possible for a non-expert to come up with new products in this space?

Are there any books you can recommend for a novice?


While Covid 19 was certainly a "Catastrophe", the market for pandemic insurance in 2020 was minuscule and to my knowledge did not cause any insurer solvency issues. There were a few cases of insurers being instructed by the courts to pay out significant claims on Business Interruption losses which the insurers argued were not covered due to existing policy exclusions.

Product development is usually highly specialized as there are a lot of nuances and frictions within the insurance industry that outsiders may not fully understand. It helps also to be in the industry and have the network with insurers, reinsurers, brokers, etc. This is not at all to suggest there isn't room for clever people to bring innovation to the market though!

Book recs are hard to specify for CAT bonds but for insurance in general:

Against the Gods: The Remarkable Story of Risk - Peter L. Bernstein

The Black Swan - Nassim Nicholas Taleb

On the Brink: How a Crisis Transformed Lloyd's of London - Andrew Duguid

The last one is a personal favourite of mine


In the end there's always someone left holding the can. Lloyd's of London has underwriters with unlimited liability. Incredibly, a lot of these Names have historically been private individuals. In the 90s a lot of them lost their shirts (and their homes) when they discovered that it wasn't just an easy source of passive income. https://www.theguardian.com/money/2000/nov/04/business.perso...


The impact was nuanced, and depended on the specific policies in place.

Some businesses had business interruption insurance which paid out. Many policies exclude highly correlated events such as pandemics.

And then think about specific events which were cancelled, which may have bought policies protecting them if cancelled.

And of course life insurance and health care would have been affected.

SwissRe often produces public reports in this space, if of interest:

https://www.swissre.com/risk-knowledge/building-societal-res...


I don't think so. I'm pretty sure it was considered an 'act of G-d', not an act of China :)

I think you could also have specific pandemic insurance, and that paid out, but those were rare before Covid.


Why would it? I don't think that much pandemic insurance is written and obviously you model all the contracts as being very highly correlated.

