Hacker News | tntn's comments

"Urban–Rural Differences in Suicide in the State of Maryland: The Role of Firearms"

https://ajph.aphapublications.org/doi/pdf/10.2105/AJPH.2017....

> Conclusions. Male firearm use drives the increased rate of suicide in rural areas


HN title is not good. Original title is "U.S. Life Expectancy Drops for Third Year in a Row, Reflecting Rising Drug Overdoses, Suicides."

The comparison is between the last three years and 1915-1918, but the Spanish flu was just getting going in 1918.

EDIT: folks just arriving, the HN title has changed since I commented. It previously said "worst trend since Spanish flu," or something.


The title is fine, and reflects the factual subtitle immediately below the headline. There is no requirement to post the article headline as-is on HN.


But it doesn't reflect the factual subtitle. The title here says 1918, but the true trend is 1915-1918, which had a bit of a war as well.

> There is no requirement to post the article headline as-is on HN.

There is. From the guidelines:

> Otherwise [i.e. not excessive clickbait] please use the original title, unless it is misleading or linkbait; don't editorialize.


Apologies, you're correct. I think focusing on the year or years will bury the lede in any case.


It is commonly known as "the 1918 Spanish influenza pandemic"

It's a title, not a series of facts


The submitted title was "Drop in US life expectancy trend not seen since 1918 influenza epidemic". A moderator changed it because it broke the site guideline: "Please use the original title, unless it is misleading or linkbait; don't editorialize."

https://news.ycombinator.com/newsguidelines.html

Submitters: Cherry-picking a detail from an article is editorializing, so please don't do that. If you want to say what you think is important about the topic, please do so in a comment, where your view is on a level playing field with everyone else's.

https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...


> Spanish flu of 1918

It was not Spanish. Its origins are in the battlefields of WWI.


Everyone knows it as the Spanish flu. It's called that because the Spanish newspapers didn't censor the deaths, unlike newspapers in the countries that were busy with World War I.


Next you're probably going to tell me about French fries, right?


Freedom Fries


Hawaiian pizza does not come from Hawaii, and yet that is still its proper name (in English, at least).


> I wrapped my objects in atomic reference counters, and wrapped my pixel buffer in a mutex

Rust people, is there a way to tell the compiler that each thread gets its own elements? Do you really have to either (unnecessarily) add a lock or reach for unsafe?


It's a library, so only half an answer to your question, but there's a fantastic library called rayon[1] created by one of the core contributors to the Rust language itself, Niko Matsakis. It lets you use Rust's iterator API to do extremely easy parallelism:

  list.iter().map(<some_fn>)
becomes:

  list.par_iter().map(<some_fn>)
Since the final copies into the minifb buffer in the original example have to be sequential due to the lock anyway, all the usage of synchronization primitives, and in fact the whole loop, could be replaced with something like:

  let rendered: Vec<_> = buffers.par_iter().map(<rendering>).collect();
  for buffer in rendered.iter() {
    // The copy from the article
  }
I've not written much Rust in a while, so maybe the state of the art is different now, but there are a lot of ways to avoid having to reach specifically for synchronization primitives.


Yes, there's `chunks_mut` [0] in the standard library, which separates the slice into multiple non-overlapping chunks.

[0]: https://doc.rust-lang.org/std/primitive.slice.html#method.ch...
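To make that concrete, here's a minimal sketch (not from the article) using `chunks_mut` with scoped threads (stable since Rust 1.63). The `fill_parallel` name and the fill values are purely illustrative:

```rust
use std::thread;

// Fill each chunk of `buffer` from its own thread; chunks_mut hands out
// disjoint &mut [u32] regions, so no locking is needed.
fn fill_parallel(buffer: &mut [u32], chunk_len: usize) {
    thread::scope(|s| {
        for (i, chunk) in buffer.chunks_mut(chunk_len).enumerate() {
            s.spawn(move || {
                for px in chunk.iter_mut() {
                    *px = i as u32; // stand-in for real rendering work
                }
            });
        }
    });
}

fn main() {
    let mut buffer = vec![0u32; 8];
    fill_parallel(&mut buffer, 2);
    assert_eq!(buffer, vec![0, 0, 1, 1, 2, 2, 3, 3]);
}
```

Because each `&mut [u32]` chunk is a disjoint borrow, the compiler accepts the spawns without any `unsafe`, `Arc`, or `Mutex`.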


If you want to use completely safe Rust, you could probably get the Vec<u32> as a `&mut [u32]`, then use `.split_at()` on the slice to chop up the buffer into multiple contiguous sub-pieces for each thread. Collect up those pieces behind a struct for easier usage. It would cost you an extra pointer + length for each subpiece, but that's the price for guaranteeing that no thread reaches outside the contiguous intervals assigned to it.

EDIT: As mentioned by a sibling, `chunks_mut` is probably closer to what you want in this instance. If you have to get chunks of various sizes -- for instance, if the number of threads doesn't evenly divide the buffer into nice uniform tiles -- you'd need to drop down to the `split_at` level anyway.
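A sketch of that `split_at_mut` approach for uneven sizes (the `split_even` helper is hypothetical, not from the post): repeatedly peel off one piece per thread, giving the first `rem` pieces an extra element.

```rust
// Split `buffer` into `n` contiguous mutable pieces whose lengths differ
// by at most one, using only safe split_at_mut calls.
fn split_even<'a>(mut rest: &'a mut [u32], n: usize) -> Vec<&'a mut [u32]> {
    let (base, rem) = (rest.len() / n, rest.len() % n);
    let mut pieces = Vec::with_capacity(n);
    for i in 0..n {
        let take = base + usize::from(i < rem);
        // mem::take swaps in an empty slice so `rest` can be reassigned.
        let (head, tail) = std::mem::take(&mut rest).split_at_mut(take);
        pieces.push(head);
        rest = tail;
    }
    pieces
}

fn main() {
    let mut buffer = vec![0u32; 10];
    let sizes: Vec<usize> = split_even(&mut buffer, 3).iter().map(|p| p.len()).collect();
    assert_eq!(sizes, vec![4, 3, 3]);
}
```

The `mem::take` trick is the standard way to reassign a `&mut` slice inside a loop without fighting the borrow checker.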


> Rust people, is there a way to tell the compiler that each thread gets its own elements?

That's what `local_pixels` does in the post. Where things get trickier is when you want to share write access to a single shared buffer in a non-overlapping way (e.g. `buffer` in the post). To do this you need to either resort to unsafe, or prove to the compiler that the writes don't overlap. One way to do the latter is to get a slice (which a Vec is convertible into), split up that slice (the standard library has plenty of methods for this: https://doc.rust-lang.org/std/slice/index.html ), and then give each thread one of those non-overlapping slices.


Yes. Instead of having a single slice of pixels, split it into n slices, one for each thread.


I wonder if there’s a way to borrow noncontiguous slices.


Yes, the standard library has many methods for splitting up a single mutable slice into multiple non-overlapping mutable slices. There's split_at_mut(), which splits at an index; split_mut(), which splits using a predicate; chunks_mut(), which gives you an iterator over non-overlapping mutable subslices of a given length; and more.
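For the noncontiguous case specifically, a small sketch of `split_mut` (the `scale_runs` helper and the zero-separator convention are just for illustration):

```rust
// Multiply every element of every non-zero run by 10. split_mut hands
// back the runs between separator elements (here, zeros) as disjoint
// mutable sub-slices.
fn scale_runs(data: &mut [i32]) {
    // All the runs are borrowed mutably at the same time --
    // non-contiguous, non-overlapping, and entirely safe.
    let parts: Vec<&mut [i32]> = data.split_mut(|&x| x == 0).collect();
    for part in parts {
        for x in part.iter_mut() {
            *x *= 10;
        }
    }
}

fn main() {
    let mut data = [1, 2, 0, 3, 4, 0, 5];
    scale_runs(&mut data);
    assert_eq!(data, [10, 20, 0, 30, 40, 0, 50]);
}
```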



unsafe is the tool you are looking for.


No, there are plenty of safe ways to achieve this in the standard library. The chunks and split families of functions on slices are all designed to do pretty much exactly this.


In fairness to GP, they are implemented using unsafe (which is unsurprising since they take one &mut and return two to the same borrowed data).


If you go by that definition, I think you’ll eventually find out that everything depends on unsafe, and thus nothing is actually safe

Which isn’t a very useful distinction


My comment really upset folks, https://doc.rust-lang.org/src/core/slice/mod.rs.html#991-100...

unsafe is the mechanism that GP needs to use to get multiple contiguous mutable borrows.

There is nothing wrong with unsafe, it is used to build all of the safe abstractions in Rust.


Everyone here is aware that split* and chunks* are built using unsafe. However, reaching for unsafe yourself in this situation is explicitly the wrong thing to do.

The entire point of rust's safety system is that it is possible to build safe things on unsafe foundations because the unsafety can be encapsulated into functions and types that can only be used safely. The safety of these functions then depends on them being bug-free, and the best way this is achieved is by minimizing the total amount of unsafe code in the ecosystem, and sharing it in widely used libraries so that there are enough users and testing to find the bugs.

So no, unsafe is not the mechanism GP needs or should use, because the split* and chunks* families of functions already exist and do exactly what they want.


:(


All of Rust's safe abstractions are built on top of unsafe. It isn't a bad thing; it just needs to be used with rigor.

Splitting a single slice into two mutable slices is done via https://doc.rust-lang.org/std/primitive.slice.html#method.sp... If you want more than that, you will need to roll your own.

I think it would be a great exercise to implement what you are asking for, the docs link directly to the source.

https://doc.rust-lang.org/src/core/slice/mod.rs.html#991-100...


AMD switched to TAGE for Zen 2, so I don't know if neural networks are "the future of branch prediction" or just a neat diversion.



What is the point of linking to submissions with no discussion?


People get angry that their submission didn’t get them the worthless internet points and this one did.


When people look over my coworker's shoulder and see his 23K HN point count at the top of his screen, they acknowledge he's calling the shots in code reviews.


Are you serious? This sounds like an awful workplace.


lol no, I'm joking, but that would be hilarious if true


Why - because he's got nothing better to do than post on HN all day?


Sadly, HN doesn't provide the far more interesting statistics of average karma per post and submission, respectively.


EDIT: I'm too old for this. Think what you want about mmap and ioctl.


My point is that ioctl and mmap disprove the "Unix Philosophy" idea that files are one-dimensional streams of characters that are all you need to plug programs together.


Do they though?

Everything as a file is an abstraction, an enormously useful abstraction. The point isn't that it has to behave like a file in every way. At the C level, the fact that open(), read(), write(), or e/poll() behave similarly is extremely helpful.

I can have a whole suite of functions that do not care what type of file descriptor they hold, but just that those functions behave in a predictable way. Maybe only 10% of my code has to care about the differences between them, instead of 70%.
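The same idea carries into higher-level languages. As an illustrative sketch in Rust (not tied to any code in this thread), where the `Read` trait plays the role of the file descriptor:

```rust
use std::io::Read;

// This function doesn't care whether `src` is a file, a pipe, stdin, or
// a socket -- only that it can be read from, much like code written
// against a plain file descriptor.
fn read_all<R: Read>(mut src: R) -> std::io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    src.read_to_end(&mut buf)?;
    Ok(buf)
}

fn main() -> std::io::Result<()> {
    // A byte slice implements Read too, standing in for any stream.
    let data = read_all(&b"hello"[..])?;
    assert_eq!(data, b"hello".to_vec());
    Ok(())
}
```

Only the code that constructs the source needs to know its concrete type; everything downstream stays generic.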


Except sockets, Sys V IPC, GPGPUs, better then?


BSD Sockets were never part of the "UNIX Philosophy"; they are an ugly quick'n'dirty port/rewrite of code from a totally alien system (TOPS-20), because the DoD had to deal with vendor abandonment.


And in the 35 years since, despite multiple new kernels and userlands and languages, a more unix-file-like socket API hasn't become popular. I'm not sure what that tells us, but it's not nothing.


Interestingly enough, Go's network functions, by virtue of being native to Plan9, actually are based around Plan9's file-based network API. It works pretty nicely, though "everything is a file stream" has its issues.

Government, Politics, resistance to change, NIH syndrome, "us vs them" and a bunch of other issues all conspired to keep BSD Sockets alive.

The first is the origin of BSD Sockets at all - UCB got paid by the government to port TCP/IP to Unix and, AFAIK, provide it royalty-free to everyone, because DoD needed a replacement for TOPS-20 as fast as possible and widely available, and there were tons of new Unix machines on anything that could get paging up (and some that couldn't).

Then you have the part where TLI/XTI ends up in the Unix Wars, associated with slow implementations (STREAMS) despite being a superior interface. NIH syndrome also struck the IETF, which left us with the issues in IPv6 development and defensiveness against interfaces better than BSD Sockets, because those tended to be associated with the "evil enemy OSI" (a Polish joke gets lost there, "osi" being easily turned into "axis").

Finally, you have a slapped-together port of some features from XTI that forms "getaddrinfo", which didn't get much use for years after its introduction (1998), so when you learned BSD Sockets programming years later you still had to handle IPv4/IPv6 manually because no one was passing the knowledge around.


What new kernels? Of the three major OSes, two are UNIX-based and one is doing its own thing.

I don't think it's a great investment of time to redesign the whole socket API, since you need to keep the old one around anyway, unless you want to lose 35 years of source-code compatibility.

The BSD socket API can definitely and easily be improved and redesigned, if only there were some new players in the field who wanted to drop POSIX and C compatibility and try something new.


> What new kernels? The three major OSes are two UNIX based, and one doing its own thing.

I meant that many new and forked UNIX-like kernels have been written over the past 35 years. Just the successful ones (for their time) include at least three BSDs, OSX, several commercial UNIXes (AIX, IRIX, Solaris, UnixWare...), and many others I'm unfamiliar with (e.g. embedded ones).

It's common to add new and better APIs while keeping old ones for compatibility. Sockets are a userland API, so if everyone had adopted a different one 20 years ago, the original API could probably be implemented in a userland shim with only a small performance hit.

However, a file-based replacement for sockets would probably work better with kernel involvement; that's why I mentioned kernels. We've seen many experiments in pretty much every other important API straddling kernel and userland. IPC, device management, filesystems, async/concurrent IO, you name it. Some succeeded, others failed. Why are there no widely successful, file-based alternatives to BSD sockets? The only one I know firsthand is /dev/tcp, and that's a Bash internal.


Except for the part where BSD is one of the surviving UNIXes.


Which should drive home the point that "everything is a file" does not accurately describe the reality of the Unix "philosophy".


If none of the Unix shells are remotely compliant with the "Unix Philosophy", then what is the "Unix Philosophy" other than inaccurate after the fact rationalization and bullshit?

Name one shell that's "Unix Philosophy Compliant". To the extent that they do comply to the "Unix Philosophy" by omitting crucial features like floating point and practical string manipulation, they actually SUFFER and INTRODUCE unnecessary complexity and enormous overhead.


> "If none of the Unix shells are remotely compliant with the "Unix Philosophy", then what is the "Unix Philosophy" other than inaccurate after the fact rationalization and bullshit?"

Precisely that! It's post-hoc rationalization and bullshit, that's the point I was driving at.


The copy on https://comma.ai/ has certainly changed tone from what it once was.

Now it is clearly pushing a driver-assist angle ("improves your stock ACC and LKAS," "copilot," "augment"). No mention of "self-driving," "autonomous," or the like.

Previously, it was clearly marketing a full-self-driving, no human involved system: "ghostriding for the masses," "software to make your car self driving."


It could be for liability reasons.


> My other option was to translate it into real assembly

I wrote a compiler from emoji-code to amd64 (mostly because I'm more interested in compilers than reversing). It runs quite fast, printing the whole domain in ~1 min. I'd highly recommend it to people who are into assembly; it was a fun exercise.


How did you implement the JUMP_TOP instruction? You need to jump to the x86_64 instructions that correspond to the given emoji index; did you implement a jump table?


Yeah, I put labels corresponding to the original IP throughout and used a jump table.
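As a purely illustrative sketch (in Rust; none of these opcode names come from the original emoji VM), the idea is that jump targets index back into the instruction stream, which is exactly what an emitted jump table of labels accomplishes at the assembly level:

```rust
// Toy bytecode VM: a jump's operand is an index into the program,
// mirroring how a jump table maps original IPs to emitted labels.
enum Op {
    Push(i64),
    Add,
    JumpIfZero(usize), // target index, looked up like a jump-table entry
    Halt,
}

fn run(prog: &[Op]) -> Vec<i64> {
    let mut stack = Vec::new();
    let mut ip = 0;
    loop {
        match &prog[ip] {
            Op::Push(v) => { stack.push(*v); ip += 1; }
            Op::Add => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push(a + b);
                ip += 1;
            }
            Op::JumpIfZero(target) => {
                let v = stack.pop().unwrap();
                ip = if v == 0 { *target } else { ip + 1 };
            }
            Op::Halt => return stack,
        }
    }
}

fn main() {
    // Falls through: 2 + 3 = 5 stays on the stack.
    assert_eq!(run(&[Op::Push(2), Op::Push(3), Op::Add, Op::Halt]), vec![5]);
    // Takes the jump: Push(99) at index 2 is skipped.
    assert_eq!(run(&[Op::Push(0), Op::JumpIfZero(3), Op::Push(99), Op::Halt]), vec![]);
}
```

In a compiled version, `ip = *target` becomes an indirect jump through a table of labels, one per original instruction index.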


That's pretty cool! I just transliterated the instructions into C macros, but I didn't bother with the jump tables. The nice thing with this approach is that you can mix VM instructions with C code freely and get gdb support. I needed that because speeding up via C wasn't enough to decode the full URL, and I still needed to do additional reversing.

Was your method fast enough to get all three parts of the URL?


It produces the full domain name (up to .com) in about a minute. If there is more to the URL (a path, query parameters, etc.) after the domain name, then no.


would you mind sharing this? I'd love to check it out :)



    >>> import autograd.numpy as np
    >>> from autograd import grad
    >>> def fn(x):
    ...   return np.power(np.power(x, 3), 1/3)
    ...
    >>> gradfn = grad(fn)
    >>> gradfn(0.0)
    /usr/local/lib/python3.6/dist-packages/autograd/numpy/numpy_vjps.py:59: RuntimeWarning: divide by zero encountered in double_scalars
      lambda ans, x, y : unbroadcast_f(x, lambda g: g * y * x ** anp.where(y, y - 1, 1.)),
    /usr/local/lib/python3.6/dist-packages/autograd/numpy/numpy_vjps.py:59: RuntimeWarning: invalid value encountered in double_scalars
      lambda ans, x, y : unbroadcast_f(x, lambda g: g * y * x ** anp.where(y, y - 1, 1.)),
    nan
… Damn.

    >>> import torch
    >>> x = torch.tensor(0.0, requires_grad=True)
    >>> y = ((x**3) ** (1/3))
    >>> y.backward()
    >>> x.grad
    tensor(nan)
… Damn.
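The nan in both frameworks is the chain rule at work, not a bug in either library. Differentiating the composite directly:

```latex
\frac{d}{dx}\left(x^3\right)^{1/3}
  = \frac{1}{3}\left(x^3\right)^{-2/3} \cdot 3x^2
```

At $x = 0$ the first factor is $0^{-2/3} = \infty$ (hence the divide-by-zero warning) and the second is $0$, so floating point evaluates the product as $\infty \cdot 0 = \mathrm{nan}$, even though the composite simplifies to $x$, whose derivative is $1$.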


eps^3 is undefined

