

The color red is often associated with bad; needs to stop; wrong. Removal isn't such a big step from there, I think. The opposite probably applies to green.


Right, that's the point. Removal shouldn't be thought of as "bad" and the red/green == bad/good association does seem to be fairly deep.


I suppose it's an OK sentiment, but I'm not in contact with my parents and on days like these I keep getting reminded of that fact. I was secretly hoping there wouldn't be anything about it on HN, but I guess I'll just have to learn to deal with it.

<goes back to reading a book in pyjamas, today is not a very good day. />


It is an OK sentiment, but not all of us have fond memories of our mothers. Mother's Day, for me, only serves as a reminder of just how shitty my own mom was...


Best of luck to you today, too.


I'm sorry that for you reading a book in pyjamas is not a good day.

Wait, actually I'm envious of how many days you must have that are even better than reading a book in pyjamas.


Ha, fairy nuff. It's what prompts one to have such a day that matters, methinks. ;)


good luck today


Thank you. :)


I suppose it's an OK sentiment, but I don't know how to program and on days like these I keep getting reminded of the fact. I was secretly hoping there wouldn't be anything about it on HN, but I guess I'll just have to learn to deal with it.

Nothing is easy. Take the first step to remedy the situation before you regret it.


I'm pretty sure that's not even remotely related.


Woah, what's with all the downvotes here?


It was a 'clever' snarky post equating 'not seeing programming' on HN vs 'not seeing "call your mother"' on HN, which we can reasonably expect to not see.

In the rush to be clever, we got a really snarky post from someone who has no knowledge of the parent comment's context.

Not everyone has a great relationship with their parents; that doesn't make it the child's fault, as super-counselor above seems to think.


I agree that concurrency isn't fundamentally hard, but the reason everyone kind of prefers faster serial execution is because a lot of algorithms (extremely simple example: f^100000(x)) are fundamentally unparallelizable. Faster serial execution is just so much more straightforward.

So a common problem with concurrency tends to be not "How do I make these functions run in parallel", but "Is there an algorithm that does the same thing I want without relying on constant function composition?"
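A minimal Python sketch of the distinction (`f` here is an arbitrary stand-in function I made up, not anything specific):

```python
def f(x):
    # Arbitrary example function; any f whose output feeds its next input works.
    return (x * 31 + 7) % 1_000_003

def compose_n(x, n):
    # f^n(x): inherently serial, since iteration i needs the result of i-1.
    for _ in range(n):
        x = f(x)
    return x

# By contrast, independent applications of f are trivially parallel: each
# element could go to a different core (e.g. multiprocessing.Pool.map).
results = [f(x) for x in range(8)]

print(compose_n(1, 100_000))
```

Only the second pattern benefits from more cores; the first benefits only from faster serial execution.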


So... I know next to nothing about parapsychology itself, but have seen a lot of the drama around it, and I just have to ask: is it not intellectually dishonest to call something the 'control group of science' when their results overwhelmingly support the hypothesis? I have a hard time seeing how this is different from any other form of science denialism. "We don't like the results because they clash with our preconceived notions of how the universe works so we made up this thing to ignore your evidence"? That's hardly a valid complaint. Basically, on what grounds can people claim one field to be nonsense (e.g. calling parapsychology the control group of science) but not others? Can someone explain this to me?


It's a valid question. The article takes the following premise: Parapsychology gets positive results following the scientific method as defined by prevailing norms in the scientific community. So why can't we draw the conclusion that parapsychology is as legitimate as other branches of science? What is to separate it?

I would offer a pseudo-Bayesian[1] answer to that question. Parapsychology aims to prove hypotheses that lack theoretical foundations. Our current understanding of physics and biology weighs strongly against the existence of psychic phenomena. Even before any experiment is conducted, we must admit that psychic phenomena are unlikely to exist. Our experimental results must be evaluated in light of that prior probability.

Thus, parapsychology is and should be held to a higher burden of proof than other branches of science. We should demand more rigorous experimental designs, stronger effects, and smaller p values. This XKCD presents a similar idea, if you substitute "psychic phenomena exist" for "the sun has gone nova":

http://www.explainxkcd.com/wiki/index.php?title=1132:_Freque...

[1] I say "pseudo" because I'm not a statistician by trade. I'm basing my argument on my rather superficial understanding of Bayesian statistics. I still think it's a valid argument in its own right, but I don't claim that it's an accurate representation of Bayesian statistics.
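To make the pseudo-Bayesian point concrete, here's a toy update in Python (the numbers are invented purely for illustration):

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    # Bayes' theorem: P(H|D) = P(D|H)P(H) / (P(D|H)P(H) + P(D|~H)P(~H))
    num = p_data_given_h * prior
    return num / (num + p_data_given_not_h * (1 - prior))

# Give psychic phenomena a prior of one in a million. An experiment comes
# back "significant" at p < 0.05 (a 5% chance of such data under the null).
p = posterior(prior=1e-6, p_data_given_h=0.8, p_data_given_not_h=0.05)
print(p)  # still around 1.6e-05: the low prior dominates the evidence
```

A handful of such results barely moves the needle, which is why stronger effects and smaller p values get demanded.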


Parapsychology aims to prove hypotheses that lack theoretical foundations.

This is the linchpin for most scientists. Before you can have a hypothesis, you must have a theory, and you can work to prove or disprove that theory by experimenting to create or observe results that the theory predicts. Parapsychology doesn't have good, testable theories. Rather, it has some interesting unexplained correlations.


Doctors rejected hand washing for a long time because there was no theory behind why it would improve patient care, even when there was overwhelming evidence showing it decreased mortality rates.

Do you discount all evidence that doesn't fit into your world view? Maybe theory hasn't caught up with evidence yet.


The answer of course is no. The degree to which we think it's shit is related to how much we know about related fields and the size of the effect, among other things. The prior isn't binary.

For the topic at hand, ESP would most likely invalidate quite a bit of physics that no one is questioning for other reasons, and there is no proposed theory to explain the effects. Our confidence in the studies is rightfully close to zero.


> Doctors rejected hand washing for a long time because there was no theory behind why it would improve patient care

The germ theory of disease and experiments supporting the theory predate medical sanitation. (Pasteur's work did come after, but he wasn't the first.) The medical community's initial rejection of handwashing was not due to a scientifically motivated demand for a sound theory. Rather, it was due in large part to doctors' unwillingness to believe that they were the ones spreading disease from patient to patient. Additionally, the medical community at the time did not embrace the scientific method to the extent that it does today. Had it, handwashing would have been evaluated in a controlled study and proven effective.


http://en.wikipedia.org/wiki/Ignaz_Semmelweis

> Despite various publications of results where hand-washing reduced mortality to below 1%, Semmelweis's observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community. Some doctors were offended at the suggestion that they should wash their hands and Semmelweis could offer no acceptable scientific explanation for his findings.

He published a lot on the subject and was rejected because it didn't fit in with how doctors saw the world and he had no theory to back up his findings. It wasn't until Pasteur that germ theory gained any widespread acceptance so it's pretty irrelevant that other people thought of it first.

Plenty of things are evaluated in controlled studies and still rejected today. For example, the article that you presumably just read discusses one such thing.


Individual experiments give false positives and false negatives all the time. Think about how you might film this using a fair coin: http://www.youtube.com/watch?v=X1uJD1O3L08 Hint: 10 heads in a row is only a 1/1024 chance, but 1024 attempts is not that many. Now realise that 5 coin flips is a 1/32 chance and 1/20 is considered acceptable to publish...

Science is not based on a single experiment; it's based on replication of experiments by different people at different times using slightly different methods. Parapsychology repeatedly fails this test, to the point where there is a million-dollar prize for anyone who can demonstrate anything in a controlled setting.
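The arithmetic in the parent is easy to check in a few lines of Python:

```python
# Probability of 10 heads in a row with a fair coin:
p_ten_heads = 0.5 ** 10
print(p_ten_heads)  # 0.0009765625, i.e. 1/1024

# At the p < 0.05 publication threshold, 20 independent experiments on a
# null effect give a large chance of at least one "significant" result:
p_at_least_one = 1 - 0.95 ** 20
print(round(p_at_least_one, 2))  # 0.64
```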


The process to win Randi's prize is not science; it is public relations for the ideology of eliminative materialism.

Science is the testing of a hypothesis by applying an instrumental injunction to reality, apprehending and analyzing the results, and sharing/verifying the results with a community of experts who have also performed the same injunction (reproducibility).

Randi's process shares none of these characteristics with the scientific process. It is entertainment, a publicity stunt that has little to do with science other than to muddy the waters when discussing these topics.


There are two differences. The first is that you have to replace "preconceived notions of how the universe works" with "vast body of experimental data and scientific understanding of how the universe works". Our evidence for how physics works vastly outweighs current parapsychology research results. The second difference is that the evidence is not being ignored (at least by the author of the article), but is being taken to indicate a real effect, it just isn't the effect of psychic powers. Our knowledge of physics means that the existence of psychic powers is given a much lower prior probability than the possibility of widespread experimental error and bias. Since both explanations are likely to produce slight positive results, the existence of psychic powers is still unlikely once you take the evidence into account.

However, the important point in the article is that in order to make this inference in an intellectually honest way, you need to significantly increase your estimation for the probabilities of widespread experimental error in ALL scientific studies that use similar methods. Since the methods in parapsychology are pretty good, this has quite a far reaching effect.

In a sense, your question about "on what grounds can people claim one field to be nonsense but not others?" is exactly the same question asked by the author. Except the author isn't implying that parapsychology isn't nonsense, they are implying that many other fields are nonsense too. This makes sense because physics is almost certainly not nonsense.


One man's modus ponens is another man's modus tollens (http://www.gwern.net/Prediction%20markets#modus-tollens-vs-m...).


Have you read the article? It's basically all about this problem...

tl;dr: Yes, that is intellectually dishonest, which is a huge problem. Scientists have two options: (1) accept parapsychology as real, or (2) accept that the "scientific method" (in social "sciences", at least) is insufficient. The problem seems to be that Bem (the author of the study) did almost everything "right", and if we increase the bar of scientific proof so that his study is excluded, so are many others...

The reason why parapsychology is excluded: the real world doesn't support it (no one is earning huge amounts of money on the stock market using psi). However, the real world often doesn't support very small effects either (e.g. Einstein's relativity). Fortunately for physics, it can afford to make its scientific methods much more rigorous, so it can study large effects as well as exceptionally tiny ones. Social studies can't, for the time being.


Okay, but that's not actually an answer to my question. My question basically comes down to this:

> Yes, that is intellectually dishonest, which is a huge problem. Scientists have two options: (1) accept parapsychology as real, or (2) accept that the "scientific method" (in social "sciences", at least) is insufficient.

I don't get why the whole thing is such a huge problem. The entire problem rests on needing parapsychology effects to not be real. If that need did not exist, we could just go "Okay, interesting, seems likely that there's something to it then. Let's do more research!" because, you know, we take that approach everywhere else. So my question remains: what is it about parapsychology that makes option two even valid to consider? All I can see is people just not liking that that may be how things work.


Parapsychology is physically impossible, and the evidentiary standards in physics are much higher, so we have much more confidence in our physics results than in these experiments - enough that we can reasonably say that physics is true and parapsychology is false.

(But these experiments are as good as many in psychology / social science - suggesting that many "proven" results in psychology / social science could be false)


> Parapsychology is physically impossible, and the evidentiary standards in physics are much higher, so we have much more confidence in our physics results than in these experiments

Conflicting results don't mean one set is impossible (cf., the long-standing apparent conflict between QM and relativity within physics). Apparently conflicting results without a methodological error in either imply that the explanatory model that appears to be supported by at least one of the results (if not both) is, while useful within its own domain, in some way incorrect.

The whole idea that the models validated by scientific experimentation are binarily true or false is, well, missing the point badly. While we hope they approach truth over time, what they are is useful (that is, they have predictive power) to a greater or lesser extent. And quite often the models with the greatest predictive power in two different domains conflict when either or both are extended outside of their own domain.

EDIT: The real problem with parapsychology is that there's little in the way of explanatory models being tested anywhere in the field. There are a lot of hypotheses without models and some experiments testing them, which (concerns about methodology aside, for the moment) might raise interesting questions and serve as inspiration for developing and then testing theoretical models to explain the effects, but very little has been done there -- which makes "parapsychology" more a collection of potentially unexplained phenomena than a branch of science that provides an explanatory model for some set of phenomena.

Which is very different from most of the social sciences.


> Apparently conflicting results without a methodological error in either imply that the explanatory model that appears to be supported by at least one of the results (if not both) is, while useful within its own domain, in some way incorrect.

When scientists thought they had found particles travelling faster than light, they checked the results, then the equipment, and then they assumed they had made a mistake and asked other people to check the numbers and the experiment. They realised that they had an extraordinary result and they wanted a very high degree of rigour.

Some parapsychologists appear to rush to publish weak results and to claim success for flawed experiments.


I'm reminded of the history of the measured charge of the electron. The first experiment to measure it was Millikan's oil drop experiment, which got a value smaller than current measurements. As other scientists made their own measurements (with different experiments), the measured value slowly increased. What is interesting is that we would not expect to see a gradual increase in the observed value. The explanation is that when people found a value that seemed "too high", they would look harder for the sources of error that could have inflated it, causing a systematic bias to under-report the charge.

Similarly, with the faster-than-light neutrino, we spent far more effort looking for mistakes that would have made our answer bigger than it should have been, which introduces the same systematic bias.

The solution to this is to realize that science is a time consuming process, and it is okay to take a while to arrive at the right answer. But, if we are aware of these problems, we can get there faster.


> When scientist thought they found particles travellig faster than light speed they checked the results, then the equipment, and then they assumed they had made a mistake and asked other people to check the numbers and the experiment.

And as it turns out, it was a mistake after all. Physics is solid to a satisfying number of digits after the decimal point.


> Some parapsychologists appear to rush to publish weak results and to claim success for flawed experiments.

So do some scientists in the supposedly "better" fields [1].

I think the fact of human nature at issue here isn't specific to any particular field of study.

[1] For a particularly well-known example, consider http://en.wikipedia.org/wiki/Cold_fusion#Fleischmann.E2.80.9...


Possibly true, but I think beside the point. You are holding parapsychologists to a much higher standard than the rest of science (except physics.) That's what the entire article is about really.


Yes, that's the point - the fact that parapsychology passes these criteria throws the rest of science into doubt.


You mean, except physics, chemistry and biology, i.e. sciences that are based on numerous, replicable, measurable, non-subjective experiments. In contrast, economics, psychology, sociology, and even medicine (at least the parts that are not performed in a lab, such as biomedicine or molecular biology) are not really sciences, but merely studies.


Jack Parsons, who was quite into parapsychology, phrased it quite neatly: It's when science becomes closed minded and degenerates into ancestor worship.


I think you accidentally commented in the wrong thread. Gave me a hearty chuckle in this context, though.


Me too, just came from that Raspberry Pi thread and thought, oh no, my browser has been corrupted. Good chuckle.


Good god, if only mathematics professors would learn this. If I hear the word 'trivial' one more time I swear I'm going to scream.


There's an old joke that goes something like this.

A math professor was giving a lecture and remarked that something was trivial. Somebody raised an objection and asked why. The professor stopped and thought hard for the rest of the lecture, then finally remarked: "Yes, I'm right, it is obvious."


I know where you are coming from... but I have started interpreting the word "trivial" (and I don't mean the math meaning of it, e.g. "empty set is a trivial solution") to be a canary of sorts. Once you encounter it and the statement that is supposed to be "trivial" isn't, that means you should backtrack and figure out what insight you are missing to make that statement trivial.

Of course, this kind of depends on the level of the author - some authors don't fully manage to put themselves in the shoes of the reader and take too much knowledge and experience for granted.


As soon as I realized this, it actually became quite helpful.

Saying something is "easy" or "obvious" is useful! It may demotivate if done poorly. However with a proper teaching approach it also signals what SHOULD be obvious. If it's not obvious to you yet, it's important to learn and understand why not, and eventually it should become obvious.


If there's any subject which would benefit from interactive textbooks, it's mathematics.

I'd like a textbook in HTML, where when it says "trivial" I can click and expand it to the proof, with relevant references that I can keep going through until I understand how we got there.


It actually means a specific thing in mathematics:

http://en.wikipedia.org/wiki/Triviality_(mathematics)

(Obvious/easy to prove)


And it's just as specifically subjective.


It would be impossible to spell everything out, though. Then you would have to start every proof by inventing the concept of natural numbers and so on.


While I agree that you can't reasonably implement a complex subject from first principles every time you want to talk about something, I think the core of the complaint was more in frequently calling something trivial when addressing a room full of students with varying ability.

Trivial is, I suspect, best expressed in terms of inferential distance that the student has to cover. Education naturally works on the borders of what people know: too far and showing people something is incomprehensible to them, too close and they're learning nothing they couldn't have found themselves simply by looking at your powerpoint stack.

I suppose, to develop that line of reasoning, it might go something like this:

Given that you're operating on the edge of people's concept space to be teaching them something worthwhile, if you're saying that something's trivial (i.e. that they should find it trivial) a lot, you're either:

A) Wasting people's time, (it's too close to their known concepts)

or

B) Confusing people, (it's too far from their known concepts)

If you're in the goldilocks zone for learning, it shouldn't be trivial. Might it rely on things that are trivial to them? Sure. But I don't see any value in mentioning that they're such, and if you're dealing with varying ability it's worth keeping in mind that some of the things you think are trivial aren't going to be to everyone.

Basically, my question to you, would be: What value is added by calling something trivial to justify the harm to those who don't find it such?

You could after all just not mention that the thing is trivial, perhaps more students would have the courage to ask you about it if they don't understand it that way.


I think you interpret too much into this question. If you do a proof, at some point you have to say something. Either "it's trivial" or "it's obvious" or "it is known" or whatever. Otherwise, how would you stop expanding the steps of the proof?

I also don't think a university course has to hold every student's hand. For example, if you offer Advanced Calculus or whatever, it is fair to expect students to know that 2+2=4. If some step is too far from a student's known concepts, they can either invest extra time trying to catch up (Google is your friend), or they can switch courses.

You could also debate which approach is better for learning maths. In my time, there were comparatively thin books about calculus and comparatively thick books about calculus. The latter spelled everything out. I hated them - too many words really made it hard to focus. I personally much preferred the dense books that left more thinking to the reader.

Maybe for some students the thick books are better, but I don't think a teacher has to please everybody. Students should have the opportunity to switch to another teacher or subject if they can't cope with the current teacher.


> I think you interpret too much into this question. If you do a proof, at some point you have to say something. Either "it's trivial" or "it's obvious" or "it is known" or whatever. Otherwise, how would you stop expanding the steps of the proof?

Just stop when you've gone as deep as you care to go. I don't see the necessity to say anything there. If it is trivial, then it's not going to be challenged because everyone will know that they'd lose, and if it's not trivial and it is challenged, then you've identified that at least one of you is going to learn something in the exchange.

> I also don't think a university course has to hold every students hand. For example if you offer Advanced Calculus or whatever, it is fair to expect students to know that 2+2=4. If some step is too far from a students known concepts, they can either invest extra time trying to catch up (Google is your friend), or they can switch courses.

There's failure on both sides if students are being entered for a course that's significantly beyond their ability in that sense.

On the one hand, the student needs to try if they expect to get anything out of it, and persistent focused effort should have equipped them for most of what they can reasonably expect to run up against. On the other hand, the college shouldn't be taking people on who are manifestly unsuited to the subject they're being admitted to.

Perhaps in first year, that's understandable. Testing what people know for admittance, especially given the pitiful standard of secondary education and testing, is a non-trivial task. However, by the time you're getting into second year, if the university has passed them, and they're under-equipped for the second year... well, why the ever loving spaghetti monster did you pass them from first year?

I think we'd probably both agree that both parties in education need to make reasonable effort. If the student isn't willing to try, then there's nothing that can be done. If the college isn't putting its best foot forwards, then the student may as well just be paying for the right to sit the exam for all the value the college is providing.


"Just stop when you've got as deep as you care to go."

Really, I don't think saying "it's trivial" is such a big ordeal for students. In maths at least, they'll quickly learn to get over it. I think in the beginning I was surprised a couple of times. Then I thought about it, and then after a while I realized it is trivial. It is also a pointer for the students so they can see "I should know this".

As for passing students from first term to second term - I am not sure if I care. Why not let students enter any course they wish?


If we're talking about college math courses, then presumably there are prerequisites for any moderately advanced course. So "trivial" would be in the context of students with the appropriate mathematical background to be taking this course.


It's a meta problem. Both math and physics have their own little language which is only vaguely inspired by standard English, and it's a popular meme to complain that they're just similar enough to cause emotional anguish.

For example a pathological problem doesn't mean you'll get a nasty MRSA infection from working on that problem. At least not directly...


Of course it's subjective. Why is that relevant? Math teachers are teaching a course with certain prerequisites, so in that context "trivial" means something like "it should be easy/obvious for people with the appropriate mathematical background for this class."


Many of the students in the class got through those prerequisites with, say, 70% on the final exam. That means they have a very tenuous grasp on much of the content, and so teaching with at least some overlap with the prerequisite is useful.

The prerequisites may have been taken 6-12 months earlier. Without constant use, a lot of the concepts that were learned in the prerequisites will be lost over that time period. Many classes will indeed start with the first lesson or two as a revision of the prerequisites for this reason. It possibly annoys those who have a firm grasp, but not everybody learns mathematics (or language, or anything else) the same way.


I've never taken a math course that didn't contain some overlap. But you simply can't realistically teach everything from every prerequisite in every class. There may always be some students who snuck through a prerequisite without learning or retaining the info. That's what office hours are for.


Except if you post a comment in a dead thread, your account is practically banned from participating again.


I'm not yet sure how to feel about this, but I have one question that remains unanswered: Will this apply to submissions (possibly in the future) as well?


Not currently.


I have two questions.

1. I'm unfamiliar with the term 'unpacking'. Is it any different from pattern matching in, say, Haskell (but perhaps not as feature-rich)?

2. Aren't slices pretty much a staple in Python? I didn't think using them was considered a 'trick'.


Unpacking is a limited form of what is called destructuring in other languages like Clojure. I would say that, in terms of feature-richness: unpacking < destructuring < pattern matching.


I'd extend that one step further

    unpacking < destructuring < pattern matching < first-class patterns
where first-class patterns are increasingly becoming available in some languages which offer pattern matching (in particular, Haskell will have them soon).


I did a little googling, but am finding it difficult to find good clear information - do you have any articles where I can read about first-class patterns?


https://ghc.haskell.org/trac/ghc/wiki/ViewPatterns

That's the Haskell extension.

To see a very nice use of them, check out this paper (pdf) http://strictlypositive.org/CJ.pdf


How do view patterns make patterns "first-class"? To me that means able to manipulate them as values, I don't see how view patterns allow that, they're just syntactic sugar for case expressions.


http://www.reddit.com/r/haskell/comments/1vpaey/pattern_syno...

I spoke too eagerly—the new feature is just named and namespaced patterns. It's a bit of a bump in power, but it's not fully general yet.

For true(-ish) first-class patterns take a look at Prisms in the lens package or some of the other first-class pattern libraries.


I don't know Clojure, but I do know Python, and I'd like to say that unpacking is more flexible than you may realize. For example, you can do this:

  a,(b,c),d = [1,[2,3],4]


I don't mean to belittle Python here, I think it's a great language and I find its unpacking useful. I'm only trying to demonstrate that the concept can be (and is in some other languages, like Clojure) taken further to make it more useful still.

I think that Python's unpacking allows most, if not all, of Clojure's sequence destructuring for tuples and lists. Clojure takes it a bit further, however, by applying it to all sequences. For example, you could do this:

    a,b,c = "XYZ"
because strings are also sequences. You can also do something like this (excuse the awkward syntax as I try to express it in pseudo-Python):

    a,b : c as d = [1,2,3,4,5]
    # a = 1
    # b = 2
    # c = [3, 4, 5]
    # d = [1, 2, 3, 4, 5]
This might not be so useful in python, since the list already is d and c is simply slicing the end from the list, but Clojure allows you to destructure function arguments: (defn foo [[a, b & c :as d]] ...) when passed the above list (foo [1 2 3 4 5]) would bind the variables as shown in the above comments.
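For what it's worth, Python 3's extended iterable unpacking (PEP 3132) does cover the "rest" part of this, though not the :as binding of the whole value:

```python
a, b, *c = [1, 2, 3, 4, 5]
print(a, b, c)  # 1 2 [3, 4, 5]

# The whole-value binding has to be done separately:
d = [1, 2, 3, 4, 5]
a, b, *c = d
```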

Where destructuring really shines, though, is that you can destructure maps (dictionaries in Python) and vectors can also be treated as maps (their keys are the indices), so you can do stuff like this:

    a, {[b {:keys [c, d]}] :foo}, e = [1, {foo: [2, {c: 3, d: 4}], bar: 9}, 5]
    # a = 1
    # b = 2
    # c = 3
    # d = 4
    # e = 5

http://clojure.org/special_forms#binding-forms


neat!


Tuple unpacking:

  t = (1, 2)
  a, b = t
  # a == 1 and b == 2
Using slices is normal with lists
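A few of the common slice forms, for anyone who hasn't used them:

```python
xs = [0, 1, 2, 3, 4, 5]

print(xs[1:4])   # [1, 2, 3]   (start inclusive, stop exclusive)
print(xs[:3])    # [0, 1, 2]   (first three)
print(xs[-2:])   # [4, 5]      (last two)
print(xs[::2])   # [0, 2, 4]   (every second element)
print(xs[::-1])  # [5, 4, 3, 2, 1, 0]   (reversed copy)
```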


Cool, I like this. Hope PHP implements it too. No need for explode().


PHP has the list construct, which is a little uglier and has a couple of odd behaviors:

    php > $t = array(1, 2);
    php > list($a, $b) = $t;
    php > echo "$a, $b";
    1, 2

