o11c's comments

The hope for semantic HTML died the day they said "stop using <i>, use <em>", regardless of what the actual purpose of the italics was (it's usually not emphasis).


Who said that? The semantics are different.

> The <i> HTML element represents a range of text that is set off from the normal text for some reason, such as idiomatic text, technical terms, taxonomical designations, among others. Historically, these have been presented using italicized type, which is the original source of the <i> naming of this element.

> The <em> element is for words that have a stressed emphasis compared to surrounding text, which is often limited to a word or words of a sentence and affects the meaning of the sentence itself.

> Typically this element is displayed in italic type. However, it should not be used to apply italic styling; use the CSS font-style property for that purpose. Use the <cite> element to mark the title of a work (book, play, song, etc.). Use the <i> element to mark text that is in an alternate tone or mood, which covers many common situations for italics such as scientific names or words in other languages.


> Who said that?

Unfortunately, a lot of people who missed the point entirely.

(We can, however, still disagree with the commenter that this "killed" semantic HTML. Fond of overstating things a bit?)


I'd take this a step further and say that the design flaws that motivated Perl6 were what really killed Perl. Perl6 just accelerated the timeline.

I do imagine a saner migration could've been done - for example, declaring that regexes must not start with an unescaped space and that division must be surrounded by spaces, to fix one of the parsing ambiguities - with the usual `use`-based incremental migration.


Hmm, is the `type` here the static type or the dynamic type, for languages where both apply?


It’s a very confusing question. How can an evaluated expression’s result have multiple types?


I don't know what GP meant, but I can give an example of how I understand it: in a type system with subtyping, your expression could be statically typed as returning a certain type, yet at runtime return any of its subtypes.

For instance, in TypeScript you could ascribe the type `any` to an expression, yet at runtime the value will always have some concrete type, never `any`.

In object-oriented languages, you may define a hierarchy of classes, say with "Car" and "Truck" being sub-classes of "Vehicle", each having a concrete implementation, and have an expression returning either a "Car" or a "Truck", or maybe even a "Vehicle". This expression will be statically typed as returning a "Vehicle", yet the dynamic type of the returned value will not necessarily be (exactly) that.


Any statically-typed language with subtyping has this. In Java:

  class Parent {}
  class Child extends Parent {}

  class Wrapper {
      Parent foo; // the field's declared (static) type is Parent
  }

  class Demo {
      public static void main(String[] args) {
          Wrapper w = new Wrapper();
          w.foo = new Child();

          // static type of the expression w.foo: Parent
          // dynamic type of the value it holds:  Child
          System.out.println(w.foo.getClass()); // prints "class Child"
      }
  }


Could be a Raku construct:

  my $day-of-week = 5 but "Friday";


The author of the article is claiming it extends beyond ads.

That does not appear to be what the court actually said, however.

And I 100% believe that all advertisements should require review by a documented human before posting, so that someone can be held accountable. In the absence of this it is perfectly acceptable to hold the entire organization liable.


The ruling is about an advertisement, but:

> There’s nothing inherently in the law or the ruling that limits its conclusions to “advertisements.” The same underlying factors would apply to any third party content on any website that is subject to the GDPR.

So site operators probably need to assume it doesn’t just apply to ads if they have legal exposure in the EU.


At a glance, that looks worse than mere negligence in adopting a new technology.

The whole point of 3D printing is that the material is moldable when hot but rigid when it cools. And people really should be aware that engines get hot.


I think there's some nuance missing here. "Hot" is a scale, not just a true/false check.


>The whole point of 3D printing is that the material is moldable when hot but rigid when it cools.

Which means what exactly? Aluminum will go soft under high temperatures as well, yet this part would not have failed if it was made out of aluminum.

The failure is not the material; the failure is someone neglecting the operating conditions or material properties when choosing materials.

This exact part could also have been milled out of some plastic and would have failed the same way. The method used to produce the part is only relevant insofar as it is open to more people.


Looks like the part was advertised as ABS-CF, but may have actually been PLA-CF, which makes a big difference.

There are plenty of even higher-temperature materials that can be 3D printed. PAHT-CF is fine at fairly high temperatures (the nozzle temperature needs to be over 260C), and metal powder-bed printers (SLM/DMLS, the metal cousins of SLS) can print parts in aluminum.


Apparently they thought it was OK because the published glass transition temperature is higher than that of the epoxy used in fiberglass construction.


Bought it at a get-together.

Like gun shows, it’s a magnet for bad ideas.


I think the main issue is that many filament manufacturers mislead or outright lie about their filament capabilities.


Related: doctors will refuse to test you to see if what you're suffering from is a particular condition unless that condition actually has a known treatment.


We test for plenty of incurable diseases.


Those are not unrelated. Both from my family and from looking at the research, there's a strong correlation between long/difficult births (sometimes explicitly hypoxia) and autism.


Would you mind pointing me at the research you found? I've been looking for studies that correlated hypoxia and autism (and related interventions that might help) but I haven't been successful.


Not that long ago (within the last decade) I spoke to a researcher working to identify autism in the womb. That seems an odd thing to chase if it’s caused by birth difficulties.


This is missing a lot of context.

What integer patterns does it do well on, and what patterns does it do poorly on?

How many strategies does it support? It only mentions delta, which by itself is not compression. Huffman, RLE, variable-length encoding ...

Does it really just "give up" at C/1024 compression if your input is a gigabyte of zeros?


Working on improving and clarifying this!

It only does delta and bitpacking now.

It should do fairly well for a bunch of zeroes because it does bitpacking.

I’m working on adding RLE/FFOR, clarifying the strategy, and making the format flexible to modify internally without breaking the API.


For the "all zeros" case, my concern is that you said you're forcing a reset every 1024 words. This implies that if you have N kilowords of zero data, then it takes N times as much space as a single kiloword of data.

Good compression algorithms effectively use the same storage for highly-redundant data (not limited to all zeros or even all the same single word, though all zeros can sometimes be a bit smaller), whether it's 1 kiloword or 1 gigaword (there might be a couple bytes difference since they need to specify a longer variable-size integer).

And this does not require giving up on random access, if you care about that: you can separately include an "extent table" (this works for large regular repeats, which you will have to detect anyway for other compression strategies, ones that normally do give up on random access), or (for small repeats only) use strides, or ...

For reference, BTRFS uses 128KiB chunks for its compression to support mmap and seeking. Of course, the caller should make sure to keep decompressed chunks in cache.
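
To make the extent-table idea concrete, here is a minimal Rust sketch (hypothetical types and names, not this library's actual API): long runs of one repeated word are stored as (start, length, value) extents, everything else stays in fixed-size packed blocks, and random access is a binary search plus a single block decode. A gigaword of zeros then costs one extent instead of a million blocks.

  /// One run of a repeated word: `len` copies of `value` starting at `start`.
  struct Extent {
      start: u64,
      len: u64,
      value: u64,
  }

  /// Hypothetical layout: long runs live in `extents` (sorted, non-overlapping);
  /// everything else stays in fixed-size bit-packed blocks of 1024 words.
  struct Compressed {
      extents: Vec<Extent>,
      blocks: Vec<(u64, Vec<u8>)>, // (index of first word, packed payload)
  }

  impl Compressed {
      /// Random access: consult the extent table first, then fall back to
      /// decoding only the block that covers `index`.
      fn get(&self, index: u64) -> u64 {
          // First extent whose end is past `index`.
          let pos = self.extents.partition_point(|e| e.start + e.len <= index);
          if let Some(e) = self.extents.get(pos) {
              if e.start <= index {
                  return e.value; // inside a run: constant space, O(log n) lookup
              }
          }
          self.decode_block(index)
      }

      fn decode_block(&self, _index: u64) -> u64 {
          unimplemented!("unpack the one 1024-word block containing the index")
      }
  }

Nothing here is specific to zeros; any sufficiently long repeat of a single word gets the same constant-size treatment.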


Makes sense. For RLE and dictionary encodings I probably won’t use the 1024 block size to split the input.

1024 is just the block size needed to vectorize delta encoding and bit packing.

I am using this library for compressing individual pages of columns in a file format, so the page size will be determined there.

I’m not using fastlanes for in-memory compressed arrays as it was originally intended. But I’ll export the fastlanes API in the next version too, so someone can implement that themselves if needed.


Better option: just wrap it in a unique struct.

There are perhaps only 3 numbers: 0, 1, and lots. A fair argument might be made that 2 also exists, but for anything higher, you need to think about your abstraction.
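
For what "wrap it in a unique struct" can look like, here is a minimal Rust sketch using a hypothetical fixed-size key and nonce as examples (the 32- and 12-byte sizes are the standard ChaCha20/IETF ones, but the point is the distinct type, not the particular numbers):

  /// A 32-byte key: the same bytes as any other [u8; 32], but a distinct type.
  pub struct Key([u8; 32]);

  /// A 12-byte nonce: the compiler now refuses to let you pass one where
  /// the other is expected, and the size limit lives in exactly one place.
  pub struct Nonce([u8; 12]);

  impl Key {
      pub fn from_bytes(bytes: [u8; 32]) -> Self {
          Key(bytes)
      }
      pub fn as_bytes(&self) -> &[u8; 32] {
          &self.0
      }
  }

  // A function taking (key: &Key, nonce: &Nonce) can no longer be called with
  // the arguments swapped, unlike one taking (key: &[u8], nonce: &[u8]).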



Nice article, never seen that.

I’ve always thought it’s good practice for a system to declare its limits upfront. That feels more honest than promising “infinity” but then failing to scale in practice. Prematurely designing for infinity can also cause over-engineering, like using quicksort on an array of four elements.

Scale isn’t a binary choice between “off” and “infinity.” It’s a continuum we navigate with small, deliberate, and often painful steps—not a single, massive, upfront investment.

That said, I agree the ZOI is a valuable guideline for abstraction, though less so for implementation.


There's a reason I prefer "lots" over "infinity".

For your "quicksort of 4 elements" example, I would note that the algorithm doesn't care - it still works - and the choice of when to switch to insertion sort is a mere matter of tuning thresholds.


The zero-one-infinity rule is not applicable to the number of bytes in Poly1305 nonces and ChaCha20 keys. They are exceptions.


Compiler speed matters. I'll confess to having less practical knowledge of -O3, but -O2 is usually reasonably fast to compile.

For cases where -O2 is too slow to compile, dropping a single nasty TU down to -O1 is often beneficial. -O0 is usually not useful: while it's faster for tiny TUs, -O1 is still pretty fast for those, and for anything larger the binary-size bloat of -O0 is likely to kill your link time compared to -O1's slimmer output.

Also debuggability matters. GCC's `-O2` is quite debuggable once you learn how to work past the possibility of hitting an <optimized out> (going up a frame or dereferencing a casted register is often all you need); this is unlike Clang, which every time I check still gives up entirely.

The real argument is -O1 vs -O2 (since -O1 is a major improvement over -O0 and -O3 is a negligible improvement over -O2) ... I suppose originally I defaulted to -O2 because that's what's generally used by distributions, which compile rarely but run the code often. This differs from development ... but does mean you're staying on the best-tested path (hitting an ICE is pretty common as it is); also, defaulting to -O2 means you know when one of your TUs hits the nasty slowness.

While mostly obsolete now, I have also heard of cases where 32-bit x86 inline asm has difficulty fulfilling constraints under register pressure at low optimization levels.

