Hacker News | cVwEq's comments

OP here. If you are sick of COVID-19 stuff, I'm sorry. I found this useful for calculating the chances that zero/one/two of my partner and me are hospitalized, admitted to the ICU, or die.

Given our underlying conditions, my rough calculations show a 6/1000 chance one of us is hospitalized, 0/1000 chance (rounded down) that both are hospitalized. Good news since we have kids.

Furthermore, a 1/1000 chance one of us is admitted to the ICU, and 0/1000 (rounded down) one or both of us dies.

Edit: Used a Monte Carlo simulation (n=10000) with a simple probability chain. Caveats and assumptions abound and YMMV, of course.
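
A minimal sketch of the kind of simulation described above, assuming two people with independent per-person hospitalization probabilities. The probabilities below are hypothetical placeholders, not the ones used for the actual estimate:

    import random

    N = 10_000                   # simulation runs, matching the n=10000 above
    P_HOSP = [0.004, 0.002]      # hypothetical per-person probabilities; substitute your own

    counts = {0: 0, 1: 0, 2: 0}
    for _ in range(N):
        hospitalized = sum(random.random() < p for p in P_HOSP)
        counts[hospitalized] += 1

    for k in sorted(counts):
        print(f"{k} hospitalized: ~{counts[k] / N * 1000:.0f} in 1000")

The ICU and death estimates would follow the same pattern, with further conditional probabilities chained onto each person's hospitalization draw.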


Here's the study on arxiv [1], if you're more inclined to use that system.

[1] https://arxiv.org/ftp/arxiv/papers/2003/2003.05003.pdf


Downloadable database of historical data by country [1], used for the Johns Hopkins CSSE COVID-19 dashboard [2].

[1] https://github.com/CSSEGISandData/COVID-19

[2] https://www.arcgis.com/apps/opsdashboard/index.html#/bda7594...


"Secondary drowning" is another term people use to describe another drowning complication. It happens if water gets into the lungs. There, it can irritate the lungs’ lining and fluid can build up, causing a condition called pulmonary edema. You’d likely notice your child having trouble breathing right away, and it might get worse over the next 24 hours.

Both events are very rare. They make up only 1%-2% of all drownings, says pediatrician James Orlowski, MD, of Florida Hospital Tampa. [1]

[1] https://www.webmd.com/children/features/secondary-drowning-d...


1%-2% doesn't seem "very rare" to me.


What kind of intuition do you have about 1%-2%? I've a crappy grasp of probability. I'd say 1-2% is like being quite certain that you'll experience this or that within 50-100 tries/repetitions. That sounds rare to me. In case of life or death it's not a risk I'd tolerate, but I'd call it rare.


My intuition comes from other uses of "rare" and "very rare" in medical fields.

"In Europe a disease or disorder is defined as rare when it affects less than 1 in 2000 citizens." [1]

""" In the United States, the Rare Diseases Act of 2002 defines rare disease strictly according to prevalence, specifically "any disease or condition that affects fewer than 200,000 people in the United States", or about 1 in 1,500 people. This definition is essentially the same as that of the Orphan Drug Act of 1983, a federal law that was written to encourage research into rare diseases and possible cures.

In Japan, the legal definition of a rare disease is one that affects fewer than 50,000 patients in Japan, or about 1 in 2,500 people.

However, the European Commission on Public Health defines rare diseases as "life-threatening or chronically debilitating diseases which are of such low prevalence that special combined efforts are needed to address them". The term low prevalence is later defined as generally meaning fewer than 1 in 2,000 people. Diseases that are statistically rare, but not also life-threatening, chronically debilitating, or inadequately treated, are excluded from their definition. """

[1] https://www.eurordis.org/content/what-rare-disease

[2] https://en.wikipedia.org/wiki/Rare_disease


That's 1 in 2,000 of the entire population. About 1/100,000th of the US population drowns every year, which means something affecting 1% of drownings has an annual incidence of roughly 1 in 10 million.

(This is annual vs lifetime but if you do the rough math under some basic assumptions, you end up easily within what you're defining as "rare")
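
A rough back-of-the-envelope version of that math; the drowning rate and lifespan here are assumed inputs rather than figures from the thread:

    annual_drowning_rate = 1 / 100_000   # assumed: ~1 drowning per 100,000 people per year
    complication_share = 0.01            # "1% of all drownings"
    lifespan_years = 79                  # assumed average lifespan

    annual_incidence = annual_drowning_rate * complication_share
    lifetime_incidence = annual_incidence * lifespan_years

    print(f"annual:   1 in {1 / annual_incidence:,.0f}")    # ~1 in 10,000,000
    print(f"lifetime: 1 in {1 / lifetime_incidence:,.0f}")  # ~1 in 127,000, still far rarer than 1 in 2,000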


Even with a 99% probability, there’s about one chance in three that you wouldn’t experience it if you tried a hundred times.


How does that work?


Let's say you roll a die with 100 sides. If you roll a 1, you die. If you roll anything else, you live. We want to know the probability you will die if you roll the die 100 times.

One way we could do this is look at the probability you'll roll it on the first roll... then the probability you won't roll it on the first roll but you will on the second roll... and so on. But that's a lot of math.

The probability of an event (death) and its complement (not death) totals 1.0. So one way we can get the probability of death is 1.0 - the probability of life.

Okay, so the only way you'll live is if you survive all 100 rolls. Each roll is independent (surviving the first roll doesn't affect the second, which doesn't affect the third). So each individual roll has probability 0.99 of survival. For the joint probability of independent events, we multiply: the probability of getting heads on a coin twice is 0.5 * 0.5 = 0.25. So in our situation here, p(survival) = 0.99 per roll, and 100 rolls means 0.99^100 = 0.36 is the probability of surviving them all. A 36% chance of survival.

The probability of death is thus 1 - 0.36 = 0.64. 64% chance of death.
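
If you want to check this numerically, here's a tiny sketch using the hypothetical 100-sided die from above, both exactly and by simulation:

    import random

    p_survive_roll = 0.99
    print(f"exact P(death in 100 rolls) = {1 - p_survive_roll ** 100:.2f}")   # ~0.63

    trials = 100_000
    deaths = sum(any(random.randint(1, 100) == 1 for _ in range(100)) for _ in range(trials))
    print(f"simulated ≈ {deaths / trials:.2f}")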


Counter question: how do you calculate how many tries it would take to reach a specific probability for an outcome? For example, how many times do I have to roll the die to have a 90% chance of death? Due to my field of work, I'd solve everything by brute force, but I wonder what a more elegant solution could be.


Just solve the equation above.

0.99^n = (1-0.9)

n*log(0.99) = log(0.1)

n = log(0.1)/log(0.99) = ~229
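
Or in code, with the same formula generalized (here `target` is the desired probability that the outcome has happened at least once; the parameter names are just for illustration):

    import math

    def rolls_needed(p_event=0.01, target=0.90):
        # Solve (1 - p_event)^n <= 1 - target for the smallest integer n
        return math.ceil(math.log(1 - target) / math.log(1 - p_event))

    print(rolls_needed())  # 230, since 229.1 rounds up to the next whole roll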


The comment that prompted the question stated the opposite probability - 99% chance of death, not life. That changes the odds quite drastically.


Clear and concise. Thanks!


The chance you don't experience it after a hundred tries is 0.99^100 ≈ 0.37.


Right, but I'm not going to nearly-drown 50-100 times in my life. I'd definitely put this in the "not worth worrying about" category.

Being a human and doing human activities carries a certain amount of risk. If we over-analyze things we end up being too scared to do anything interesting... and if we start applying this "I'm scared of everything" mentality to parenting, we fall into the "helicopter parent" trap, which is even worse.


Being scared of being scared is also a thing. If water got into my loved one's lungs, I wouldn't take a 1-2% risk of them suffocating in their sleep. I only gamble with what I'm willing to lose.


Drownings are rare, I would say that 1-2% of something rare is very rare.


Sure. But given someone has drowned, a 1-2% of something happening should result in serious precautions being taken.


Said serious precaution is monitoring by a person.


Indeed, especially considering that this is "1%-2% of all drownings". It says nothing about how many people suffer from this after a near-drowning and subsequently recover from the secondary episode. For all we know, more than half of near-drownings may end up with secondary symptoms.

Also, we don't know how many near-drownings vs drownings there are. Lots of good statistical quiz questions in here.


As an aside to the parent poster, an often-stated guideline for the maximum amount you should own in the companies you work for is 10% of your portfolio value [1][2]. Modern portfolio theory (Markowitz et al.) calculations for a bundle of assets would probably bear out that 20% in a single stock is not on the efficient frontier [3].

[1] https://www.marketwatch.com/story/dont-invest-in-your-compan...

[2] https://www.forbes.com/sites/maggiemcgrath/2013/10/22/how-mu...

[3] https://en.wikipedia.org/wiki/Efficient_frontier
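
As a rough illustration of the kind of calculation MPT implies, here's a sketch with made-up volatility and correlation figures (the numbers are assumptions; only the mechanics matter):

    import math

    sigma_stock = 0.40   # assumed annual volatility of a single employer's stock
    sigma_index = 0.16   # assumed annual volatility of a broad index
    rho = 0.5            # assumed correlation between the two

    def portfolio_vol(w_stock):
        w_idx = 1 - w_stock
        var = (w_stock**2 * sigma_stock**2
               + w_idx**2 * sigma_index**2
               + 2 * w_stock * w_idx * rho * sigma_stock * sigma_index)
        return math.sqrt(var)

    for w in (0.0, 0.10, 0.20):
        print(f"{w:.0%} in employer stock -> portfolio vol ~{portfolio_vol(w):.1%}")

Under these assumptions the 20% position adds roughly two points of volatility over holding the index alone; whether that extra risk is compensated depends on the stock's expected excess return, which is exactly the efficient-frontier question.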


Thanks for that! Just to be clear, what I meant was, "I wouldn't consider 20% as a target to be indicating he thought the stock was going to crash / was a bad value." I had thought of adding that even 10% or 5% would be reasonable, but I didn't really have any good grounds for saying so other than my totally untrained, amateur intuition. So thanks for the supplement. :-)


Depends how the company is doing. Depending on the internal transparency of the company, as an employee you're often privy to a lot of information that Wall Street does not have access to, e.g. you'll often know the upcoming product pipeline, employee morale, culture, and key metrics for the success of the business that are not published in their financials. If those are doing well but Wall Street is treating the stock poorly, it makes sense to max out your stock compensation and never sell, at least until the metrics turn or you leave the company. (Yes, technically this constitutes insider trading, but it's virtually impossible to prove that inaction was because of inside information vs. because of laziness or inattention, unless there's a pattern of action.)


As an employee at a megacorp, sure, you can see a little bit about how things are going from the inside. That said, you probably spend less time looking at reports than professional analysts do. And sure, maybe you can see how great your company is, but do you have a frame of reference against a million other companies and their projects and cultures? There's a strong bias towards thinking you know more than the market that I would be wary of.


That depends a lot on your past background. A number of the employees at these megacorps have either worked in finance before, or at one of the big consulting firms, or they've done the Valley dance between the hot growth companies of the moment. So they do have that reference point between many different companies. There's sort of a revolving door among many of the elite institutions that run the world - Ivy League universities, large dominant corporations, big-name consulting, finance, and government.

If this doesn't apply to you, and a megacorp was your first job out of college that you stuck with for your whole career, you may want to be a bit more cautious with your capital.


You forgot to mention that many analysts are full of BS anyway: https://www.inc.com/geoffrey-james/financial-analysts-are-ut...


Even in that case, it may make sense to diversify. The market can remain irrational longer than you can remain solvent.


That saying's usually not true when you're dealing with employee stock compensation, where you own the shares outright and make enough in cash salary to live on. Ownership is forever (modulo a revolution or other forcible overthrow of the rule of law, in which case you have bigger problems). In the absence of leverage, you can afford to be perfectly rational and have an infinite time horizon.


The company I used to work for not only didn't provide stock options or RSUs or (that I can recall) an ESPP for ordinary employees, they eventually took the choice to invest in company stock away from the 401k plan, because they decided it would encourage people to not diversify and they might be considered liable.

I remember when the CEO visited, and while she clearly didn't know that our division existed, the things she talked about highlighted how little we knew of the rest of the company. You have something with tens of thousands of employees, and you have probably quite a few groups of a few hundred people that just have no particular connection to the rest of the company. In our case, we started as an acquisition that was kind of forgotten about.


That's not really applicable to "Directors", who are quite different from "employees/workers".


I’m not a big believer in MPT as correlations are unstable, but a simple rule of thumb would be to treat anything over 10% as significant exposure. With some big tech, there’s some diversification due to different business segments (e.g. Amazon ecom/AWS), vs recent IPOs where the stock might function more like an option.


Diversification is for people who have no idea what they're investing in. Portfolio theory is spray/pray with no information which is what VCs do with unestablished startups, hoping for the wins to beat the losses. If you want to be that passive then just buy an ETF or all the large-cap blue-chip dividend stocks instead to keep it simple.

Investment funds with a real thesis and research don't do this. Concentrated positions and proper risk management is active investing and generates much greater profits. If you know a sector and company is doing well, diversifying will only reduce your returns.


Diversification can mean different things to different people.

If you mean that >30 stocks is pointless, I agree. How different is the Dow from the S&P 500 or the whole market, even though its methodology is atrocious?

If you mean that even with a large edge, you should take positions that are >20%, I don't agree.


Then we don't agree. The concentration of a position depends on the confidence of the investment and direction. 20% isn't a magical rule, no point in following it blindly.


I mean, it's like sticking a knife in an electrical socket. Can you get away with it, possibly? Sure. That doesn't mean there's a significant downside to a rule of not doing it.


In theory, that sounds good. But over a 10-year window I have barely seen anyone beat the S&P 500.


Who do you mean by anyone? I know traders averaging 30% and the best one has done over 100% so far this year, and these are 6 and 7 figure accounts.


Same here, but maybe the difference is that we install Windows 10 Pro whereas the author is a Windows 10 Home user? The article doesn't seem to state the version they are using.

Most annoying is that there are a large number of little bits of functionality that phone home or send information out into the ether: Windows Defender and its sample submission, searching via the Windows button, Task Scheduler telemetry items, a plethora of Control Panel privacy settings, etc. etc. etc.

Also annoying on Windows 10 Pro is that the same windows builds have slightly different functionality --- even if the machines have the exact same hardware.

For example, the "Search History and Permissions" setting is sometimes named "Change the permissions and history of search", even for the same Windows 10 build. It's bizarre.

Also, don't get me started on the intellisense typing when the windows menu is open. (Really Windows, when I press the Start button and then type in Update, you search the web and show me Wikipedia information? And it takes like 5 seconds?)

Don't forget the lame This PC icon...

Sorry, this turned into a cathartic listing of Win 10 grievances. :)


> maybe the difference is that we install Windows 10 Pro whereas the author is a Windows 10 Home user

No, every installation of Windows 10 Pro I've seen had a start menu that was mostly ads (links to Candy Crush, Spotify, Office etc.) - I've also had the OS nag me about giving Edge a second chance.

This is in Germany, in case the region matters, and I always deny all spyware as far as possible using the GUI.


    Get-AppxPackage -AllUsers | Remove-AppxPackage
This is the first thing I run on any Windows box I am forced to interact with.

Removes all the junk in the Start menu in one command.


I just take the ones I want from this list; you can copy/paste them into PowerShell and they execute one after the other.

I would not do what he suggests as that will remove every app.

    To uninstall 3D Builder:
    get-appxpackage *3dbuilder* | remove-appxpackage

    To uninstall Alarms & Clock:
    get-appxpackage *alarms* | remove-appxpackage

    To uninstall App Connector:
    get-appxpackage *appconnector* | remove-appxpackage

    To uninstall App Installer:
    get-appxpackage *appinstaller* | remove-appxpackage

    To uninstall Calendar and Mail apps together:
    get-appxpackage *communicationsapps* | remove-appxpackage

    To uninstall Calculator:
    get-appxpackage *calculator* | remove-appxpackage

    To uninstall Camera:
    get-appxpackage *camera* | remove-appxpackage

    To uninstall Feedback Hub:
    get-appxpackage *feedback* | remove-appxpackage

    To uninstall Get Office:
    get-appxpackage *officehub* | remove-appxpackage

    To uninstall Get Started or Tips:
    get-appxpackage *getstarted* | remove-appxpackage

    To uninstall Get Skype:
    get-appxpackage *skypeapp* | remove-appxpackage

    To uninstall Groove Music:
    get-appxpackage *zunemusic* | remove-appxpackage

    To uninstall Groove Music and Movies & TV apps together:
    get-appxpackage *zune* | remove-appxpackage

    To uninstall Maps:
    get-appxpackage *maps* | remove-appxpackage

    To uninstall Messaging and Skype Video apps together:
    get-appxpackage *messaging* | remove-appxpackage

    To uninstall Microsoft Solitaire Collection:
    get-appxpackage *solitaire* | remove-appxpackage

    To uninstall Microsoft Wallet:
    get-appxpackage *wallet* | remove-appxpackage

    To uninstall Microsoft Wi-Fi:
    get-appxpackage *connectivitystore* | remove-appxpackage

    To uninstall Money:
    get-appxpackage *bingfinance* | remove-appxpackage

    To uninstall Money, News, Sports and Weather apps together:
    get-appxpackage *bing* | remove-appxpackage

    To uninstall Movies & TV:
    get-appxpackage *zunevideo* | remove-appxpackage

    To uninstall News:
    get-appxpackage *bingnews* | remove-appxpackage

    To uninstall OneNote:
    get-appxpackage *onenote* | remove-appxpackage

    To uninstall Paid Wi-Fi & Cellular:
    get-appxpackage *oneconnect* | remove-appxpackage

    To uninstall Paint 3D:
    get-appxpackage *mspaint* | remove-appxpackage

    To uninstall People:
    get-appxpackage *people* | remove-appxpackage

    To uninstall Phone:
    get-appxpackage *commsphone* | remove-appxpackage

    To uninstall Phone Companion:
    get-appxpackage *windowsphone* | remove-appxpackage

    To uninstall Phone and Phone Companion apps together:
    get-appxpackage *phone* | remove-appxpackage

    To uninstall Photos:
    get-appxpackage *photos* | remove-appxpackage

    To uninstall Sports:
    get-appxpackage *bingsports* | remove-appxpackage

    To uninstall Sticky Notes:
    get-appxpackage *sticky* | remove-appxpackage

    To uninstall Sway:
    get-appxpackage *sway* | remove-appxpackage

    To uninstall View 3D:
    get-appxpackage *3d* | remove-appxpackage

    To uninstall Voice Recorder:
    get-appxpackage *soundrecorder* | remove-appxpackage

    To uninstall Weather:
    get-appxpackage *bingweather* | remove-appxpackage

    To uninstall Windows Holographic:
    get-appxpackage *holographic* | remove-appxpackage

    To uninstall Windows Store: (Be very careful!)
    get-appxpackage *windowsstore* | remove-appxpackage

    To uninstall Xbox:
    get-appxpackage *xbox* | remove-appxpackage


It removes nothing of any importance. The terrible WinRT photo viewer can be replaced with the classic one, which is still installed along with the OS and just needs to be enabled:

https://www.tenforums.com/software-apps/8930-windows-photo-v...

I've been running three systems with all APPX junk removed since Windows 10 was first released. Not a single problem on any of them.


Most of those you can right click on and select uninstall now. It takes longer, but it feels safer to me because I have to take a second for each to think if I want it or not.


I think being forced to think about ads for Spotify and Office is exactly what he's trying to avoid.


I was forced to upgrade my workstation to Windows 10 and that's exactly what I was doing yesterday, after unsuccessfully trying to remove bloatware via the Control Panel.

I am fumbling in Windows 10. I have lost so much muscle memory due to this upgrade. While I know I'll build it back, I'm frustrated that I have to relearn things I was doing with my eyes closed.


Nice list. PowerShell doesn't accept the & character though; works great otherwise.


Doesn't that also remove Calculator?


Yeah don't do this - I did this once and I ended up having to reinstall since it broke enough stuff that the control panel wouldn't even work afterwards.


I've just done it to a VM and Control Panel is still there. I suggest you run:

    get-appxpackage -allusers | fl name

and curate the list first, then pipe that through remove-appxpackage. Quite a few appx thingies refused to uninstall due to being important! On the other hand, so far this VM is working fine, and by following a few of the other suggestions from How-To Geek (e.g. removing Bing from your start menu) it seems almost usable.


For some reason I've had varied results with this (within the same region). I've had installs where the Start menu was really polluted with all kinds of games, and installs where none of them were present. All of them are usually done from the latest ISO I could download from the MS page.

All of the installs were made on OEM machines (HP, Lenovo) that already came with a Pro license or upgrade option. This was done in a home environment so no AD/enterprise options, and never logging on with MS account. So I wonder if there's the possibility that specific OEMs, models, license keys get the treatment while others do not. I'm not sure if the behavior was tied to particular machines or not since I didn't follow the scientific method while doing installations (will do in the future).


Another major difference is probably between people who log in with a Microsoft account vs a local account.

I don't remember seeing web results in my start menu, is that a Cortana thing?

I suspect that there are also regional differences, more crap being pushed to US consumers


Web results are still tied to a toggle: if you do not agree to Microsoft's data processing during installation you won't get that "feature". That said, search is a downgrade from Windows 7's. For instance, it doesn't look at start menu folder names or executable names anymore...


Control Panel items not showing in the start menu results anymore when you search also happens in Win10, and it drives me crazy. It literally looks like the guys designing windows must be using a Mac themselves, or just aren't computer guys. I just don't get how things like that wouldn't annoy them too.

My guess is that decision making in large companies is so slow, so bureaucratic, that everyone has given up on shipping anything other than a mediocre product, even if people individually would design it very differently if they could.


> It literally looks like the guys designing windows must be using a Mac themselves, or just aren’t computer guys.

Or maybe they're trying to kill the Control Panel altogether and switch everyone to the new Settings app, but are aware that doing so immediately would lead to a lot of complaints? So they have both, while slowly nudging you towards using the Settings app so you wouldn't be too angry when you wake up one day in the future and find the Control Panel completely gone.


Except that until the Control Panel's functionality has been replicated in the new Settings app, we still need a way to access the Control Panel. Hiding that functionality from search in the meantime just doesn't make sense at all.


And even when functionally fully replicated, the settings app UI is really questionable, and somewhat hard to use.


This. I think they removed at least half of the items from "Control Panel\All Control Panel Items" compared to Windows 7/8/first builds of 10. In a few years Control Panel will be removed altogether.


I would mind it less if they would actually make the settings app feature complete!


Those Wikipedia results in this situation always open in Edge. I wonder if this helps inflate Edge's user stats?



Windows 10 Pro has everything this article mentions. Maybe Microsoft doesn’t enable all of the nags depending on where you’re living.


One way to make money is if you sell the bond at a higher price later to another buyer. From the article:

“Why are people buying at negative yields? It is mainly in expectation that you’re going to be able to sell to someone at a higher price later on,” said Andrea Iannelli, investment director, fixed income at Fidelity International. “Whatever the yield you have to assume you’re going to make more on the capital gain than lose on the yield.”

So Y < X, but if you bet you can sell at price Z to another buyer later, Z > X and you profit.

As an analogy I just thought up: it's kind of like overpaying for a house, thinking that in time the house value will appreciate.
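
A toy illustration of the mechanics with made-up numbers (a zero-coupon bond for simplicity; none of these figures come from the article):

    face = 100.0       # paid at maturity
    price_now = 102.0  # you pay more than you will ever get back at maturity
    years_left = 5

    # Yield to maturity is negative: held to the end, you lock in a small loss.
    ytm = (face / price_now) ** (1 / years_left) - 1
    print(f"yield to maturity: {ytm:.2%}")                              # about -0.40%

    # But if rates fall further and someone pays 103 for it a year later...
    price_later = 103.0
    print(f"1-year holding return: {price_later / price_now - 1:.2%}")  # about +0.98%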


Or just paying for a house, thinking it will appreciate. (The "yield" of a house is negative, because it costs money to keep the thing in the same condition you bought it in, as anyone who owns a house knows.)


There are links to two different forms on the File A Claim page [1]: A PDF for adults as of 5/13/2017 [2], and one for minors as of 5/13/2017 [3].

It looks like, if your goals are a balance between receiving the settlement, privacy, and effort, the PDF (probably [2] if no minors were involved) is the way to go.

[1] https://www.equifaxbreachsettlement.com/file-a-claim

[2] https://www.equifaxbreachsettlement.com/admin/services/conne...

[3] https://www.equifaxbreachsettlement.com/admin/services/conne...


Boy, this brings back memories. I was working at Andersen Consulting (AC, then Accenture) at the time our CEO, George Shaheen, left to join Webvan. The general sentiment at AC (at least amongst the grunts in my circle) was that we were glad to see George Shaheen leave, but were all left wondering if we should be jumping ship too, to join the dot-com boom.

The partners at AC were trying to hang on to talent for dear life because of the dot-com boom. When Webvan went bust, many of the partners/associate partners were quick to point that out, maybe as some kind of misguided retention pep-talk or something.

As a funny aside, there were some who used to call George Shaheen "George Unseen:"

http://www.bigtimeconsulting.org/remember-george-4

http://www.bigtimeconsulting.org/ceo-of-the-future-4

http://www.bigtimeconsulting.org/george-unseens-reward-4

http://www.bigtimeconsulting.org/webminivan-a-look-back-4


For all the criticisms of Musk, and recent wackiness, it's moments like these that I thank our lucky stars that there are people like him on this earth. It just seems like these days there are so few people thinking about the long-term future of the human race.

I love it that his solution to the "AI terminator" problem is making a brain-computer interface so that we can have a fighting chance when AI takes off.

I love it that he wants to help us kick our addiction to non-renewable fossil fuels.

I love it that he wants to make us a two planet race so that we don't have all our humanity eggs in one planetary basket.

Thank you Elon.


There are, and have been, many many people working on all of these problems long before Elon Musk (the tech announced today at Neuralink is entirely built on work that was done by people at UCSF and UC Berkeley, and even that is an iteration on technology that was developed by scientists over the past several decades). Neuralink the company was founded by eight people other than Musk. It's a huge disservice to all of these people to give Musk the credit for their work.

Musk sits in a weird position where his unique blend of controversy keeps him in the headlines and ensures he gets linked to these technologies, but that does not mean he is responsible for nor deserves the credit. It could be argued that his "ability" to constantly land in the limelight draws more attention (and thus progress) to these issues, but others would argue that we would be even further along if not for the constant controversy he creates.


One speaker is a professor from UCSF who studied the brain's processing of motor signals. He explicitly credited Musk for having the right vision and the long term planning, and that's why he left his position after 16 years to come work with Neuralink.

No one credits Musk for solving bugs in the software on his products, or for creating these brain-computer interfaces. But he can assemble the team to do it, and motivate them to keep moving and progressing on pretty aggressive schedules. And he frequently gives credit to his team (and doesn't sit there patting himself on the back).


> And he frequently gives credit to his team (and doesn't sit there patting himself on the back).

Yes, the list of authors on the whitepaper they released is "Elon Musk & Neuralink". I guess his team should be thankful.

https://www.documentcloud.org/documents/6204648-Neuralink-Wh...


As the other commenters said, that's not a research paper. Here is a link to an actual research paper where (at least some of) the authors work for neuralink.

https://www.cell.com/neuron/fulltext/S0896-6273(18)30993-0


That's a white paper, not the research paper (which is what people will read). You can also read it the other way around: he wanted to credit the entire Neuralink team, without claiming to be part of it or leading it.


It would have read that way if the author list had simply said 'Neuralink'. He is definitely positioning himself as the leader here.


bioRxiv required at least one human author. We suggested this author list to him and, honestly, we just think it’s awesome.


Actually, the leader is usually the last author. The first author is usually the student doing the gruntwork


Fred Wilson has a blog post where he outlines the role of a CEO like this:

>A CEO does only three things. Sets the overall vision and strategy of the company and communicates it to all stakeholders. Recruits, hires, and retains the very best talent for the company. Makes sure there is always enough cash in the bank.

Based on everything we saw in the Neuralink livestream, it seems like Elon is nailing all three of these. Doesn't mean he deserves all the credit, but it means he's doing his job.


> He explicitly credited Musk for having the right vision and the long term planning, and that's why he left his position after 16 years to come work with Neuralink.

How do we know that's why, vs his estimation of Musk being the right kind of showman to get a lot of investment.


> How do we know that's why, vs his estimation of Musk being the right kind of showman to get a lot of investment.

Because it is what he explicitly said. As I pointed out in my parent comment.

Of course I'm sure access to capital plays a role. Otherwise it's just someone with a good idea and no money. A meh idea but lots of money also wouldn't attract these kinds of people.


What people explicitly say is their reasoning isn't necessarily their reasoning; especially in an investor/recruiting hype presentation.


Technological R&D doesn't get you anywhere on its own. It's an important prerequisite, sure, but just as necessary is the next step, where a company is formed to commercialize/productize novel research through years of schlepping through market education and government safety trials, to pave the way for the technology to become a "safe" product category for other companies to follow on to. There are many technologies stuck between these two stages—thoroughly "researched and developed", but not yet commercialized.

People like Musk (and the people he co-founds these companies with) are important because they're taking nascent product categories that are "stuck" in the R&D stage with little attention being paid to them, and directing large-scale consumer demand onto them in a way that brings profit-driven industry interest—and therefore industry talent—into the picture. Even if it's not Musk's offering that ends up winning the space, these efforts redefine the public perception of the category in a way that means every company in the space wins.

(For another equivalent example: the creator of Bitcoin did more for smart contracts by creating one platform that led to competitor platforms that actually had smart-contract support, than a thousand academic smart-contract system projects ever could have.)


... and there were people working on electric cars and rockets before Musk came along too, but somehow he just manages to nudge things along a lot more than the average person!


Our media perpetuates and encourages erratic behavior. People like/love Musk because he does it, for science!

I personally have no problem with him.


I am also an optimist and a techno-utopian...

BUT:

1. This has not been tested on a single human yet, as it has no FDA approval.

2. Preliminary trials in fully quadriplegic patients are several years away (these are also not yet approved).

3. Should these trials succeed, this will still not be available as an elective procedure for healthy people (that will take much more time).

4. The skull exists and is a hard barrier that is not going away. A decade or so from now, should this be approved as an elective procedure, patients will have to have a hole drilled in their skull (note that most people find LASIK invasive, even after decades of successful surgeries).

5. Patients will also have to become comfortable with thousands of fibers being inserted (albeit in a minimally invasive way) through brain tissue by an automated surgical robot.

6. Should the procedure be successful, patients should finally, at long last, be able to control a mouse, keyboard, or smartphone using their brain, imagining the movements instead of using their hands.

There is perhaps, a cyberpunk future where crime syndicates mine Bitcoin in the brains of their victims, where malware pipes gigabytes of extremist political memes in seconds through the dorsolateral prefrontal cortex of young adults.

Maybe that will come one day, but this technology is only using the signals generated by the brain to control a mouse and keyboard. This existed twenty years ago in chimpanzee studies. The real innovation here is in materials science and surgery.

This is amazing multi-disciplinary science in the pursuit of advanced medicine, and we should be applauding it for what it is.

So, thank you Elon for funding this -- but more importantly, thanks to all the scientists, researchers, and engineers who have dedicated their lives to the advancement of our science and medicine.

I will not be electing to undergo this surgery in the future.


The applications are very very speculative and far-reaching. I think, by the time the applications are feasible there will probably be a way to do minimally invasive craniectomy. The neural implant is impressive work, but anything beyond that is probably going to be very different than is speculated.


> There is perhaps, a cyberpunk future where crime syndicates mine Bitcoin in the brains of their victims, where malware pipes gigabytes of extremist political memes in seconds through the dorsolateral prefrontal cortex of young adults.

Not bad, man. I'd read that book.


> I love it that his solution to the "AI terminator" problem is making a brain-computer interface so that we can have a fighting chance when AI takes off.

This seems like a well-intentioned medical application that incorporates the latest research findings and likely addresses historical downsides of the field (e.g. scarring issues with long-term deployment of invasive BCIs).

I don't see how that is anywhere close to being related to some sci-fi "AI terminator" scenario, though. If you want to go into some cyberpunk fanfic about Musk you can just turn this application around and spin an "AI is now able to fry our brains" narrative, which is neither helpful nor realistic. This AI FUD is so weird to me: you're much more likely to be killed by a badly written autopilot in a fancy car, a failed operation to get your brain USB plug, or a malicious application of AI by companies or state actors in areas like mass surveillance and population control... than by a real, strong AI that suddenly emerges, becomes sentient, and decides that humanity cannot be trusted.


What evidence is there that AI is going to "take off" and threaten humanity somehow? How are people imagining this process would happen?


The reverse argument made here is usually the turkey fallacy. For the turkey, all logical evidence points to a continuously improving quality of life, with every need met and a constant availability of food. There's no evidence that it's going to be eaten this Thanksgiving, so any effort in building turkey-computer neural links is dismissed offhand as a waste of time.


How does this analogy apply specifically to AI, though? There's also no evidence that God is going to come and pronounce his judgement on us, so any effort in prayer and pious living is often dismissed off hand as being a waste of time. Should non-believers in God reconsider their ways given their knowledge of the turkey fallacy?


I think that's the premise of Pascal's wager: that if you simply multiply cost by expectation, believing in God is a better bet.

Of course with something like a general AI, all bets are off. This neural link thing is a horribly bad defense, because of all possible defenses it's the one that could give the AI a direct connection to your brain.


This is mostly media hype IMO, but I have a bias, as I have a degree in machine learning and still work in AI.

For an analysis of the state of the field and the surrounding media attention, I highly recommend this blog post by Zachary Lipton [1].

[1] http://approximatelycorrect.com/2017/03/28/the-ai-misinforma...


There are two ways:

1. Paperclip optimizers: a very smart computer that you tell to do one menial task, like producing as many paperclips as possible or proving a mathematical theorem, can turn into a catastrophe as that computer turns all the iron on Earth into paperclips, or into computers that all try to find a solution to the theorem. This also includes computers that we task to "protect" humanity coming to the conclusion that humans having the power to kill each other is mankind's biggest threat.

2. A crazy would-be dictator who wants to rule the world and tells an AI to do it, or to kill all humans, or something else.

TLDR: First way: forgetting to tell machines not to kill humans (or not doing it in an effective manner). Second way: some really shit individual explicitly telling machines to kill humans.

The first danger is one we already face: basically since we've had machines there have been accidents with them, including ones involving casualties. In general, the more we care about avoiding casualties the less likely they are. However, it only takes one super intelligent paperclip optimizer to "break loose", so given the high number of possible casualties, there needs to be a lot of care taken to prevent even one such event.

The second danger needs to be dealt with as well. One could do two things: very slow deployment of super-AI capabilities at the start, while building AIs that can defend governments and somehow encoding into them how the government works (to prevent parts of the government from using that machine in a coup). The same computers will prevent revolutions though, so I guess we'll see less and less of those. You can think of variations of those ideas, like AIs that only enforce Asimov's laws or only make sure that we don't use any weapons more powerful than $weapon on each other.

What I don't understand though is how neuralink will help with coping with those threats.


1. Unplug the paperclip optimizer. Blow it up. The problem with the LessWrong idea is that they keep ascribing more and more godlike powers to AI to counter very common objections to technology. Somehow the entire thing becomes a Godzilla-like self-sustaining organism that ignores anything we can do or throw at it, and has magical powers. Meanwhile, it seems that major websites can have outages if people go on summer vacation and the interns are on duty.

2. They can do that now. What would an AI do differently that couldn't be accomplished by conventional weapons? How would it do so without using said weapons, or anything that could just as well be done without it?

The AI thing is just a secular form of the rapture, a particular variant of existential dread for people with little to no religious belief.


> Somehow the entire thing becomes a Godzilla-like self-sustaining organism that ignores anything we can do or throw at it, and has magical powers. Meanwhile, it seems that major websites can have outages if people go on summer vacation and the interns are on duty.

Sure, the risk is low right now, but the more powerful the computers we can build, the larger the potential risk. Before you manage to press the off button, the computer might already have deployed a bioagent or killed countless people with drones.

> They can do that now. What would an AI do differently that couldn't be accomplished by conventional weapons?

A military made out of humans is subject to human failings. It is a well-known problem that soldiers shoot in the general direction of the enemy, to avoid punishment for not shooting, but miss on purpose. As an extreme example, the Nazis had to give lots of free alcohol to their soldiers so that they'd continue shooting civilians, burying them under new bodies before they had even died. They later invented gas chambers as an easier method of killing masses of people. Compared to humans, an AI does what it is told. If you tell it "Kill all humans", it will do it.



Same guy who doesn't know Hume's Guillotine [1] or anything about philosophy, and is a charlatan with his meditation app.

[1] https://youtube.com/watch?v=wxalrwPNkNI


https://samharris.org/response-to-critics-of-the-moral-lands...

Do you have anything more substantial to say than ad hominem? Like a response to the video I linked instead of grinding your unrelated axe against the guy?


The title of the book, "How Science Can Determine Human Values", is literally in contradiction with Hume's Guillotine, which anyone who has taken Philosophy 101 should be aware of.

>Do you have anything more substantial to say than ad hominem?

Nope, because the title of the book says it all


I'm interested in hearing more about him being a charlatan with the meditation app, especially considering that, as far as I remember, you can get it for free just by asking.


There are two types of people you meet who are into mindfulness: dedicated practitioners (monks) and the yoga guy from Los Angeles who is "kinda" into mindfulness but not really.


I can only go by what he and other people close to him say, but he says he used to do plenty of acid and has been to retreats in Asia for months and months (cumulatively) during his early life, and he seems to be good pals with people like Joseph Goldstein (who studied under Asian teachers in the 60s/70s). He has probably experienced all kinds of stuff.

Point being, if you get (at least some of) what there is to get, does it matter where your body was born or what it looks like? Is it a bad thing that western-born people are bringing this (Buddhist/Hindu/Jain) thought to the West?

I would revise your statement: there are the monastics who dedicate their lives to this, the lay people who practice, and the commoditized 'yoga as exercise/stretching' folk and peddlers who are far removed from its spiritual components.


Eh, if you use the word "mindfulness," you are yoga guy. A monk isn't mindful; he is mortifying his flesh to practice the tenets of the religion he believes in fiercely enough that he is willing to self-imprison to follow it better. What you see as mindfulness is just the surface results of winning that struggle. It is very possible to lose it instead, and monks are often open about the dangers of monastic life.

I think people really don't get religion in this sense. The radical, wild, anarchic aspects of it. Mindfulness is more just a wish for stoicism in religious guise; the idea of being not stoic, and weeping over your prayers in a cell because you feel the weight of the world's sin and know that the time is short will not often occur to people.


>Eh, if you use the word "mindfulness," you are yoga guy

Everything in Buddhism and meditation revolves around mindfulness/sati/awareness, whatever you call it.

>A monk isn't mindful; he is mortifying his flesh to practice the tenets of the religion he believes in fiercely enough that he is willing to self-imprison to follow it better.

Monks have to cultivate the eightfold path, which includes right mindfulness, so a monk who isn't mindful isn't really a monk. And also, wow, that sounds so disrespectful and ignorant.


What exactly is wrong with being a regular person who practices mindfulness? Are the benefits they receive not legitimate in your eyes?

The philosophy I have been exposed to through meditation has helped me better understand how the ego can cause problems. It seems you are rather attached to the idea of a very pure, austere study of meditation and associated philosophies. There are other valid ways of approaching such things that you are unjustifiably disregarding.

Alternatively, you could look at it as someone simply being earlier on their path, and provide encouragement instead of ridicule.


>What exactly is wrong with being a regular person who practices mindfulness

Completely normal.

Sam Harris is a charlatan for preaching it using "his program": https://www.goodreads.com/book/show/18774981-waking-up

Spare me this book and its under-4-star rating.


> Completely normal

Ok, good to hear. That's not the impression I got from your comment about the LA yoga guy.

> Sam Harris is a charlatan for preaching it using "his program": https://www.goodreads.com/book/show/18774981-waking-up

What's wrong with the book? I read it and thought it was, on the whole, interesting and useful. Obviously it isn't perfect.

How specifically is Sam a charlatan? What falsehoods does he claim about himself regarding meditation?


>problem is making a brain-computer interface so that we can have a fighting chance when AI takes off.

our fighting chance is EMP.


That's like saying a lion's chance against humans is its big teeth. It's dangerous in a particular context, sure, but that misses the fundamental asymmetry that took humans from being scared of lions to being an existential threat to lions.


what is EMP?


Apparently Paul Allen's Experience Music Project has been a secret weapon against malevolent AIs all along!

Or if you're no fun, it's an electromagnetic pulse.


An electromagnetic pulse, created when a nuclear bomb explodes, that some say will destroy all electronic devices within hundreds of miles that haven't been specifically designed to be EMP resistant.


> so that we can have a fighting chance when AI takes off.

there wouldn't be a need for this without rampant, myopic introduction of AI. why not just stop that irresponsible "innovation"?


These sorts of criticisms always ignore the prisoner's dilemma for the sake of expressing moral indignation.

There is no stopping individual agents in a system from doing what helps them most without an authoritarian at the top. Mostly, those authoritarians come with even worse problems so we're left with this imperfect world.

I'd love to see comments on HN not focused on self-righteousness, and instead realizing that there is no one guy at the top that you just have to scream really loud at.

