RailsCasts helped me so much in getting started with Rails in the late 2000s. It opened my eyes to how one can effectively structure a web application in general, from caching to authorisation. Ryan’s explanations were concise yet accessible; it felt like a colleague showing me cool stuff.
I figured it must have been popular, but wouldn’t have guessed as high as $1M revenue. Looking forward to reading the rest of this series.
This release is particularly interesting because it includes the sidebar patch, which has existed in various forms for about 10 years [1].
Mutt has generally been quite conservative in accepting patches. NeoMutt [2] is a project hoping to kick-start development on the Mutt project, by being much more willing to accept contributor patches.
The repository [3] is becoming more and more active; help is always greatly appreciated!
I use LaunchBar and didn't know about this; cool! For those who also didn't know about it: it's the 'Instant Send' feature under 'Shortcuts' in the preferences.
It doesn't seem to work for words selected in terminal applications, though, like in vim or in a scrollback buffer in tmux or less (you can click-and-drag to highlight things, but I mean selections made within the apps themselves, like `viw` in vim to select the word under the cursor). That would be very nice.
I am a particle physicist, and used to use ROOT every working day. It is still used daily by thousands of other particle physicists, though, and is a core part of many high-energy physics experiments.
I think there are a few objectively neat features of ROOT:
* Versioned persistency of C++ objects deriving from the TObject base class [1];
* Script-like execution of C++ and a C++ REPL based on clang [2]; and
* Dynamic bindings of the C++ classes to Python [3].
There's an accompanying, but independently developed, file access protocol for reading and writing ROOT files over a network, too [4].
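To give a flavour of the last two points, here's a minimal PyROOT sketch (the object and file names are just made up for illustration): a TObject-derived histogram is created and filled from Python, then serialised to a ROOT file along with its class version, which is what makes the persistency 'versioned'.

    import ROOT

    # Create a TObject-derived histogram and fill it with toy data.
    hist = ROOT.TH1F("h_mass", "Toy mass;m [GeV];Candidates", 100, 0.0, 10.0)
    for _ in range(10000):
        hist.Fill(ROOT.gRandom.Gaus(5.0, 0.5))

    # Serialise it; the class version is stored alongside the object, so
    # files written with older class layouts remain readable.
    out = ROOT.TFile("toy.root", "RECREATE")
    hist.Write()
    out.Close()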
On the other (subjective) hand, ROOT is regarded as a pain to use by ‘analysts’, the people who use ROOT to make the results that go into physics papers. There are already some good, old-but-still-valid critiques [5, 6], so I won't say too much, but I think a large part of the problem comes from two things:
1. ROOT tries its best to do everything that a particle physicist might want to do. This encompasses a very wide range of things, which has led to ROOT having a very large, often intractable codebase that cannot be modularised.
2. It has failed to keep up with contemporary coding techniques and analysis methods. Most of the PhD students I know use the Python interface to ROOT, and yet the ROOT developers are planning to drop Python support for the next major version (ROOT 7, which is expected in 2018). Those that do use C++ aren't able to use even C++11 effectively with ROOT, as its interfaces aren't compatible.
Luckily, I'm confident that analysts will move to a better way. I've been very encouraged by the astrophysics and machine learning communities in particular, who are using Python to do low- and high-level analysis on large datasets, as we do in particle physics, and are producing fantastic results. Tools like pandas, matplotlib, and scikit-learn are an absolute joy to use in comparison with ROOT, and the communities within the Python ecosystem are wonderful: they foster very open code development, and value readable, well-documented, fast code.
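As a rough illustration of the difference in feel (a toy sketch, with invented file and column names), a typical load-cut-plot step in the Python stack is only a few lines:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Load a flat table of candidates and apply a selection cut.
    df = pd.read_csv("candidates.csv")
    selected = df[df["bdt_score"] > 0.9]

    # Plot the mass distribution of whatever survives the cut.
    plt.hist(selected["mass"], bins=100)
    plt.xlabel("Invariant mass [GeV]")
    plt.savefig("mass.png")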
I don't need ROOT to get any better, because I think the future is already here.
* HEP stores about 0.5 exabytes of data in ROOT format; that's almost exclusively serialized objects that do not know anything about TObject.
* XRootD is not really specific to ROOT files. A better example would maybe be our JavaScript de-serialization library, https://root.cern.ch/js/
* No way will the Python binding be dropped. I wonder where you got that rumor from. About one third of our users are using it.
* HEP is limited by CPU resources, which is part of the reason why HEP decided to use a close-to-bare-metal language for the number crunching part.
* We just made the use of python and R multivariate analysis tools with ROOT data more straightforward.
* We have people from genomics etc coming to ask for help, because they cannot find a system that scales as well as ROOT does.
And then we have a different perception of the direction out there. I see that Hadoop was nice but slow, and Spark is nice but slow, so now things are moving to C++; see e.g. ScyllaDB. There is no reason for us to move away from it, but every reason to make it more usable.
And yes, I agree that this is an issue. But many physicists do not.
* ROOT files still have terrible documentation. Rene throws up his arms in protest anytime people say this (I've personally witnessed this)
* Physicists still don't like the pyroot interfaces; otherwise rootpy wouldn't exist.
* astropy is proof that you can be performant and user friendly. Julia is proof that you don't even need a C++ library underneath.
* Saying ROOT scales well is weird; it is true that ROOT and the ROOT I/O file format are efficient, but additional services have helped it scale (dCache, XRootD, batch farm/grid/DIRAC, etc.).
* Not sure what the ScyllaDB tangent has to do with anything. There are scalable open-source RDBMS options out there too, like CitusDB and Greenplum, which support UDFs. Hadoop and Spark with HDFS are still great for certain applications, and as general data-analysis tools they are great, but it's tricky to get them to perform well without HDFS, and the grid model of computing doesn't lend itself well to that paradigm.
* I've heard the C++ interpreter is much better with Cling (if that's you, I applaud your effort!). CINT was a gun that fired in both directions for every grad student I ever had to help.
* XRootD has little to do with ROOT anymore other than it also implements the original root protocol.
* ROOT is not modular. It is both an application and a collection of libraries and somewhat of a VM. That does make some things convenient, but it also makes some things extremely hard.
There are many reasons to move away from ROOT, and the astrophysics community is a prime example of that!
Thanks for clarifying. You're right that I was too broad, and it's certainly true that many physicists don't share my opinion (I'm working on that).
Speed is always a concern, but I don't think it dictates that C++ should be the primary ‘user-facing’ interface. Numpy is fast, but it doesn't sacrifice a nice API to achieve it.
For me, a big difference is that a lot of the Python packages feel fast to use and, most importantly, fast to write. ROOT can be fast to execute, no question, but I feel like I'm fighting against it (and I'm sorry that's very vague and qualitative).
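A toy example of what I mean (nothing ROOT-specific, just invented numbers): computing a derived quantity for a million 'events' is a single vectorised expression in NumPy, with no explicit loop and no boilerplate, and it runs in compiled code.

    import numpy as np

    # A million toy (px, py) pairs.
    px = np.random.normal(0.0, 1.0, size=1000000)
    py = np.random.normal(0.0, 1.0, size=1000000)

    # Transverse momentum for every event at once.
    pt = np.hypot(px, py)
    print(pt.mean(), pt.std())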
It would be very interesting to hear more about the genomics use-case, and how they evaluated the other options.
The thing that bothers me most about ROOT is that some parts of it are basically not maintained at all.
There are serious bugs in RooFit which haven't been fixed in years. Wouter Verkerke has abandoned it (from what I can tell). Lorenzo Moneta is fixing the worst potholes, but it seems he has neither the authority nor the time to tackle the misguided interface and the broken scaffolding of RooFit.
Maybe ROOT7 will be a chance to take ownership of RooFit again.
Have there been any success stories in regard to genomics and ROOT? About 10-15 years ago the group I was with then explored ROOT as the alternatives (Perl, early versions of R, etc.) weren't very attractive. We didn't end up going with ROOT ourselves for a variety of reasons, but did anyone else in the field do so?
Sure, but what if you have more than one section that should be styled differently? Classes can help differentiate same-name elements in different contexts, so you don't need unwieldy structure-specific CSS selectors.
You'll also get major specificity issues when mixing element selectors with class or ID selectors. My rule of thumb is classes, and only classes, for styling; it really makes life a lot easier. (Even if you have a unique element on the page, don't use the ID in your selector to style it.)
Totally agreed. And more generally, specificity is by far the most difficult challenge in large-scale CSS. Without discipline and some well-defined conventions for selector construction, you will end up with the equivalent of spaghetti code and late nights deciphering unexpected cascading from other people's styles.
Since large projects will inevitably have to rely on non-semantic HTML and class names anyway (e.g. to differentiate between sibling <p> tags), a simple rule of thumb would be to only use class names in selector construction. HTML elements can still be semantic for other purposes, but the CSS should not care about it.
The other half of the specificity problem is nesting. I advocate strongly against the descendant combinator ` ` in favor of the child combinator `>`. E.g. `.body > .content` is much more robust than `.body .content`. There are alternate approaches as well, though.
I have a rule that if you are in a situation where there can only be one element with those properties (it would make no sense at all to have more), you should use the id.
The top elements are one example. It makes no sense to include them inside any other element, and if that ever changes, all the styling will need to change anyway.
> I have a rule that if you are in a situation where there can only be one element with those properties (it would make no sense at all to have more), you should use the id.
Not a good idea: IDs take massive priority over classes in the cascade.
As someone who often has to override an enormous stylesheet which is all styled to IDs, this is a serious pain point.
What if you had a requirement to create variation styles of those elements? How would you do that? Would you add your base style in your id selector, and then variations with classes? Again, you'll face specificity issues with that approach. Keeping things simple is the best approach here IMO. I've done a heck-load of CSS and have seen how hacky a stylesheet can become because of specificity issues. Sticking with classes and keeping selectors short will make your stylesheet easier to follow and more maintainable, but this is just my approach!
Agree that structure-specific CSS is unwieldy. However, its robustness is still a worthy benefit. If you use a white-space-sensitive CSS preprocessor, it is not unwieldy, and actually quite maintainable and elegant, since its structure reflects nicely-formatted HTML. E.g.
    main
      > section
        > main
          text-decoration blink
Thus, things that are easy to change in HTML structure (cutting, pasting, and changing indentation) are similarly easy to change in CSS.
As it says in the article, this is the discovery of parity, P, violation in a system where it's not been seen before. In a "nice" universe, one might expect mirror images to behave the same, but here they've discovered that interactions between electrons and quarks are different depending on the spin of the electron.
So, it's related to CP violation in the sense that something "violates CP symmetry" if its behaviour changes when you flip the charge and the parity of the system simultaneously. Here, they only flip the parity.