It may be new, but it is not a well-thought-out design.
110+ characters per line is hard to read and ugly [1].
Pure black on white has been discussed many times. While it satisfies min contrast requirements, it feels unnatural, and again, hard to read, causing eye strain [2][3]. Although this is not an issue on screens with automatic brightness adjustment, they are sadly not everywhere, and it is wiser to target an average screen, considering Mozilla's audience.
[1] http://practicaltypography.com/line-length.html
[2] http://ianstormtaylor.com/design-tip-never-use-black/
[3] https://ux.stackexchange.com/questions/23965/is-there-a-prob...
> Pure black on white has been discussed many times. While it satisfies min contrast requirements, it feels unnatural, and again, hard to read, causing eye strain…
I completely disagree with this. Consider the failure modes of erring in either direction:
- Too much contrast? Reduce screen brightness. (This has the beneficial side effect of increasing battery life on mobile devices.)
- Too little contrast? Uhh… squint more, or try to find software that lets you override the designer's intent.
Most people aren't using professional screens in low lighting, and not everyone has young eyes. Lower contrast can be a deal-breaker for these people. Even for those with perfect vision, low contrast can be incredibly annoying. It's so frustrating to view a screen in direct sunlight and be unable to see content because a designer didn't want to cause eye strain.
A side note: Amusingly, the first thing I did when looking at your second source (http://ianstormtaylor.com/design-tip-never-use-black/) was use my reader mode extension. I disliked the off-black sans-serif font.
I would prefer if the background color were something other than pure white, though, or even better, if a night mode (light on dark) toggle were offered. Staring at large expanses of white is uncomfortable for my eyes.
That's exactly what I do on my personal website. At the bottom of every page is a button which inverts the color scheme to white text and black background.[1] It sets a cookie, so the setting should persist across all pages.
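The mechanics are nothing fancy; here's a rough sketch of the idea (the "theme" cookie and the "inverted" class are placeholder names, not my actual code):

  // Rough sketch: a button handler that flips the color scheme and remembers it.
  // Cookie and class names here are illustrative only.
  function toggleScheme() {
    var inverted = document.body.classList.toggle('inverted');
    // Persist the choice site-wide for a year.
    document.cookie = 'theme=' + (inverted ? 'dark' : 'light') +
      '; path=/; max-age=' + (60 * 60 * 24 * 365);
  }

  // On each page load (after the body exists), re-apply the saved choice.
  if (/(?:^|;\s*)theme=dark/.test(document.cookie)) {
    document.body.classList.add('inverted');
  }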
>> Pure black on white has been discussed many times
>I completely disagree with this
Pure black on white has been recommended against for some time; typically you'd use dark grey on a pure white background.
>Note: for low resolution screens, an overly strong contrast (full black and white) is not ideal either, as the text starts to flicker. Benchmark: #333 on #fff.
Thanks for the feedback! So, one thing we are doing differently this time around is updating the site incrementally instead of doing a big-bang redesign. In the case of the article pages, that means limited changes in the first phase to get them to conform to the new brand identity. The number of characters per line is an issue, though. As stated in the blog post, we'll address the article page layout in the next phase. Article pages are where MDN users spend most of their time, and we want to make sure we get those right, so we are setting aside time to focus on just them.
> 110+ characters per line is hard to read and ugly [1].
It's not that straightforward. While shorter line lengths may be preferable for reading bodies of text[1], API pages are rarely read top to bottom. Reducing the number of characters per line makes less content visible on screen, necessitating extra scrolling and slowing down visual searching.
> It may be new but it is not a well-thought design. 110+ characters per line is hard to read and ugly
Er, not sure if you're looking at the before and after screenshots the wrong way round? The new design appears to reduce the number of characters per line from ~100 to ~80 (by increasing the body font size).
Compare first line of Array#slice in the old design:
"The slice() method returns a shallow copy of a portion of an array into a new array object selected"
vs the new:
"The slice() method returns a shallow copy of a portion of an array into a new array"
I was checking the blog page, which has adopted the new design. MDN pages are already responsive, but the line length goes far beyond that figure in some sections. The screenshot is at a reduced window size.
I don't mean to be all, "ugh," but syntax should be at the top. Looking at that shot, as opposed to the one in the blog post, it will be common for syntax sections to fall off the bottom of the screen, "below the fold." This will make it less useful than sites that don't require the user to scroll to get the most basic information, such as method signatures, and MDN will continue to be a second choice.
OK, that all got a little sharp, but I do wonder what "design language" (per the blog post) is steering their information design.
I agree, but I don't think going through and reworking any actual content of the articles (these are all essentially wiki pages) is in scope for this stage of the redesign. 'atopal mentioned elsewhere in this thread [0] that they're doing this incrementally, and the next phase will be dedicated to article page layout.
You might want to mention this on the accompanying Discourse feedback thread [1] so it's noted for the next phase.
Is putting ads in Hacker News comments allowed? That practicaltypography link leads to http://practicaltypography.com/graylist.html, which has no content and asks you to pay for something.
You got me curious enough to check it out. Ad or not, that is one of the most incredible sites I've seen, with deep resources that are very valuable. I learned a lot in the 20 minutes there. The 'interstitial' is just that -- a plea for donations to support the writer and hosting for an ad-free (it's ad-free for everyone, not just donors), high-quality resource. I had to read the page 3 times before I picked up on the instruction to input the root URL into the address bar, instead of being linked there. Haven't looked at the code, but is he just looking at referrers and dynamically routing based on being on his "graylist"?
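If it is referrer-based, the logic could be as small as something like this (a purely speculative Node sketch, not his actual code; the graylist entries are made up):

  // Speculative sketch of referrer-based routing: visitors arriving from a
  // "graylisted" site get the donation interstitial instead of the page.
  const http = require('http');

  const GRAYLIST = ['news.ycombinator.com', 'reddit.com'];  // hypothetical entries

  http.createServer((req, res) => {
    const referer = req.headers.referer || '';
    if (GRAYLIST.some(host => referer.includes(host))) {
      res.writeHead(302, { Location: '/graylist.html' });
      res.end();
      return;
    }
    res.end('the requested page');  // normal handling otherwise
  }).listen(8080);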
If I find myself there more, I'd be inclined to send him something.
As to its effects, I think he'd prefer to drive off any and every reader who is offended by the ask. They are just costing him bandwidth with zero chance of any return. Maybe they share it with someone who ends up donating, but people who actually donate are much more inclined to be good sources of referrals to other people who may donate.
As to whether it equates to posting a link to an ad or promotion, I really don't think it crosses the line. This isn't one of those infomercial sites that lead you on for an eternity without ever providing anything of substance. Instead this is just an interstitial on steroids meant to drive off undesirable traffic. It's 1000 times better than annoying ads and/or anti-blocking measures, imo.
There could be valuable content there, but I will never know, because I followed a link in a comment and all I got was an ad asking me to pay for something.
Wow. I wonder if that page actually has the intended overall effect. I've never seen this site before, but the actual content seems useful and (perhaps unsurprisingly) well presented. That said, hijacking me to another page to condescendingly nag me (and not even including a link back to the content I was originally linked to) really puts me off. I really think that the kind of people the author claims to want money from would be better reached by simply making it easy to donate.
Happy to see the new design looks similar to the old design, not an experimental new UI that removes half the information or puts everything behind things you need to expand or similar!
I always favor MDN over w3schools results when searching for a JavaScript, HTML, DOM, or CSS property :)
Wanted to say the same... It's in a similar vein to how long expertsexchange remained in search results after the Stack Overflow sites took hold. I usually try to click the MDN links in search results without accidentally clicking the w3schools links.
Though lately, caniuse, node.green, and similar sites seem to be where more of my searches end up. It's hard to keep up with which JS features are supported across browsers. MDN helps a lot with syntax/usage, and it's way better than any alternative I've seen as a pure JS resource. I do wish they'd expand the compatibility info a bit to show which browser versions support a feature, not just which browsers; then they'd be my first stop for almost everything.
w3schools has a special DNS entry blocking access for me and my interns. Now that I say that, I feel like one of those bigcorps that block access to Facebook.
I like how they're using the logo font for the headlines. As someone who already knows the new logo, this helps me connect MDN to the Mozilla brand more intuitively.
However, I'm just as concerned about the huge font sizes and high font weight as the majority here. It really distracts from the actual content.
You know, I saw that font last night near the top of Google Fonts and didn't realize its name was short for Mozilla. I generally prefer sans-serif fonts, but I really like the Regular variant!
I've posted the link to this thread over there along with a couple main points from the comments here so far, but if you really want to make sure your input is heard, you might want to hop on over there to give it. You can log in with a GitHub or Google account if you don't feel like signing up with your email.
That giant font works for Array.prototype.slice but what about CanvasRenderingContext2D.prototype.createRadialGradient or WebGL2RenderingContext.prototype.getActiveUniformBlockParameter?
That's inconsistent with the Array docs. The correct name is WebGL2RenderingContext.prototype.getActiveUniformBlockParameter() if you're trying to be consistent.
Array.prototype.slice is specced out by the JS standard. getActiveUniformBlockParameter is part of the DOM. The DOM is specced in terms of interfaces—the only thing that the DOM guarantees is that when you're interacting with an object that satisfies the interface, it will have a getActiveUniformBlockParameter method. WebGL2RenderingContext.prototype.getActiveUniformBlockParameter is a JS-ism and isn't essential to the implementation of the spec.
What? It's required that setting WebGL2RenderingContext.prototype.getActiveUniformBlockParameter to some other function work. You set it before object creation. Objects created indirectly get the new functionality through their prototype. This is true for all DOM objects. Maybe I don't understand your distinction.
> Objects created indirectly get the new functionality through their prototype
Except most parts of the DOM in most engines are implemented in C++, not JS.
<whatever>.prototype is a JS-ism. But the DOM is not defined in terms of JS. It's defined in terms of a language-agnostic set of interfaces. So when poking around and seeing things like NodeIterator.prototype within JS, you're seeing them because the browser is presenting it to you in a way that kind of resembles the way things work for that runtime. <whatever>.prototype (when <whatever> is some DOM interface) is a byproduct of that behavior. But that those interfaces get implemented is the only essential characteristic of the DOM, and the docs should reflect those interfaces, not the weird and tangential byproducts of how the DOM gets projected into a JS runtime.
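For instance, poking at a couple of common DOM interfaces in a current browser's console shows exactly this projection (nothing MDN-specific here, just what today's engines expose):

  // DOM operations show up on the interface prototype because that's how the
  // engine projects the language-agnostic interface into JS. The spec defines
  // the interface; where the function object lives is a JS-level detail.
  console.log(typeof Node.prototype.appendChild);  // "function"
  console.log(Object.getOwnPropertyNames(Document.prototype)
    .includes('createElement'));                   // true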
I'm not understanding the distinction from a JS programmer's perspective.
If I have
a = {}
a.foo = () => {};
foo is a member of a.
Whereas if I have
class A {
foo() {}
}
a = new A();
foo is a member of the prototype for class A. This is important because I need to know whether replacing A.prototype.foo will affect all new As or none. Documenting that the function is defined on the prototype tells me this. So, I need both Array.prototype.slice and WebGL2RenderingContext.prototype.getActiveUniformBlockParameter documented the same way.
If WebGL2RenderingContext.prototype.getActiveUniformBlockParameter is only documented as WebGL2RenderingContext.getActiveUniformBlockParameter, that suggests to me that I can't patch it via the prototype.
Patching via the prototype is far more useful because I can affect objects indirectly. Patching each instance requires me to modify code at creation time, code that might be in a 3rd-party library.
So, if .prototype. is left out of the docs for WebGL2RenderingContext.prototype.getActiveUniformBlockParameter, that suggests to me I can't do the former. It seems like Array.prototype.slice and WebGL2RenderingContext.prototype.getActiveUniformBlockParameter should be documented consistently. In other words, a JS programmer does not care about implementation details. They don't care that one object is a DOM element and the other a plain JS object (at least not in this case). They care how they can use it in a program. To do that they need to know whether the function is on the prototype or on the object itself, and documenting it as Class.prototype.func tells them that.
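A quick toy illustration of why that matters (made-up class, not MDN's example):

  // Patching a method on the prototype affects every instance, existing and
  // future, because method lookup goes through the prototype chain.
  class A {
    foo() { return 'original'; }
  }

  const before = new A();
  A.prototype.foo = function () { return 'patched'; };
  const after = new A();

  console.log(before.foo());  // "patched" -- even instances created earlier
  console.log(after.foo());   // "patched"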
> foo is a member of a [...] foo is a member of the prototype for class A [...] they need to know whether the function is on the prototype or on the object itself, and documenting it as Class.prototype.func tells them that
I know how prototypes work, and I know why `Array.prototype.slice` and friends are documented that way, because I wrote those docs back in 2007, and caught a bunch of flak along the way when making the move away from referring to those and other methods in the form `Array.slice`.
It's hard to address the issue using your specific example, because it's so contrived. (Who is replacing native implementations [don't], and why? [Once again: don't.]) Here's the actual motivating factor for why you see stuff like `Array.prototype.slice` documented that way:
There are two methods, `fromCharCode` and `charCodeAt`. However, you call the former as `String.fromCharCode` and the latter as `x.charCodeAt`, where `x` is some string instance. That's an important distinction, which means it's important that we not refer to both in the form `String.fromCharCode` and `String.charCodeAt`. The former is a real method that actually exists; the latter is something that doesn't exist. Which means that if we tell readers "use `String.charCodeAt`", then what we're doing is giving readers bad, confusing, and possibly frustrating information, which is not what you want when your goal is to explain things to an already unsure or simply ignorant audience. This distinction only became more important with ES5, since it started adding things like `Object.keys` and `Object.defineProperty` directly to the constructors, rather than making those methods available to all instances by adding them to the prototype.
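Spelled out in code (plain JS you can run in any console, nothing MDN-specific):

  // fromCharCode hangs off the String constructor itself; charCodeAt lives on
  // String.prototype and is called on string instances.
  console.log(String.fromCharCode(104, 105));  // "hi"
  console.log('hi'.charCodeAt(0));             // 104
  console.log(typeof String.charCodeAt);       // "undefined" -- no such method
  console.log(typeof 'hi'.fromCharCode);       // "undefined" -- likewise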
But this is all beside the point, because we're talking about the DOM here.
> I'm not understanding the distinction from a JS programmer's perspective
That's the problem. Because what we're discussing is a reference for people using the DOM, and once again your changes would force a bunch of JS-isms into the scope of the documentation, and not only that, but a bunch of shaky, not-at-all-well-understood-or-agreed-upon, cobweb-covered parts of how the underlying objects get projected into JS.
> In other words, a JS programmer does not care about implementation details.
You understand that what you're asking for is for implementation details to surface through the docs, right? And to be a prominent feature? That's what you're asking for.
I'm pretty tired of having to zoom way in on every website I visit just to get text to a readable size. I think the size on these pages looks and reads great. I can count on one hand the number of times I've arrived at a website and had to zoom out because the text was too large.
Even Hacker News I keep permanently zoomed to 125%, and I sometimes still find it too small.
No, the grandparent comment resonates with me. I have good eyesight (when last tested, four years ago, it was better than 20/20) and use monitors with standard DPI at standard resolutions, and yet it strikes me that never in my life have I had to zoom out on a web page, while I frequently zoom in (including on Hacker News). Standard font sizes really are too small.
Not sure how much is age... I notice that reading my phone is often harder than desktop. I usually have a set of reading or computer glasses with me, but don't need them except for things that are close. Stuff on the phone always seems a bit small. Maybe there should be an effort to consult designers over 40 on some of these web sites/apps.
For Hacker News specifically: Verdana at 13px (12px for body text) was a fine choice at 72 dpi.
HN didn't exist in the 1990s, but its design principles are from that era. The look itself is a nice throwback, but the type size follows a technical constraint that no longer makes sense (it only made sense when CSS was nonexistent or very limited).
Do you know that this is the case? I agree that some brands' attempts at minimalism center on removing features, but MDN's example page doesn't appear to have major dimensional changes or any obviously missing content.
Plus, improving typography and cohesiveness of design language can aid speed of navigation, when done carefully.
That said, I did think MDN's current design was both usable and modern, so it wasn't high up on the list of sites I was hoping for a refactor of...
The only thing I'd like to see different there would be a slightly smaller font size on 'Syntax'; it seems a bit large. Other than that I like the new design.
The increased font size and more pronounced page hierarchy make it easier for me to grok the page structure at a glance and navigate to the section I need (since I'm probably not reading the entire page front to bottom if I'm at an MDN link).
Why does all the information need to be above the fold? It's not like I can interpret everything on the page in the 1000ms it would take me to scroll.
Below the fold arguments only make sense when you can't grasp the purpose of the page without scrolling.
You're not supposed to cram every piece of information you can above the fold. The information that is getting cut off in this case is hardly pertinent to the understanding or navigation of the site.
Their image shows that the information density is basically unchanged. That was the point they were trying to make, as a refutation to the grandparent's complaint: "And less actual content, forcing you to scroll or zoom out."
Except that MDN isn't your hipster startup website with a 20MB hero image that needs to get its point across immediately. You're never going to end up on MDN and say "I have no idea what this is". You're there because you already know what's under the damn fold.
Thanks for the convenient comparison. The most significant differences at first sight:
- Reduced / removed breadcrumbs: not OK.
- Inverse text/background: not OK in general.
- Removed "last contributors": not necessary to remove; IMO a neat "community" perk.
~ Increased colour contrast: not necessary.
~ More aggressive headings and sidebar navigation: not necessary.
+ Preserved Dino: phew, I'd miss him.
+ Search (bar?) at the intuitive place: YAY, FINALLY!!1!
And unrelated, the new Mozilla logo: still cannot get used to it; seems like the calligraphic humble old one was vandalized by some 1337 H4X0RZ :|
I think the difference is negligible. It's not as if MDN is serving up single page resources as it currently stands.
Is scrolling really that big of a barrier to information? It seems like a fair trade-off to me. Their current design tends to blend together to my eye when scanning, and it's difficult to "jump" to the sections I'm looking for.
That's not the problem with MDN. MDN's big problem has been a confusion of out-of-date versions of documentation. It doesn't help that they've tried hard to get developers to use Mozilla technologies, from Jetpack to Firefox OS, which were then abandoned.
Firefox's drop to 15% market share has a big effect on developers.
I've watched my add-on usage drop in lockstep with Firefox's market decline.
This is addressed in the previous post in the series [0], linked at the top of this one. MDN is refocusing on exclusively providing solid, up-to-date web documentation, and all the old, obsolete, and Mozilla-internal stuff will be trimmed away or moved elsewhere.
I don't mind the fonts nearly as much as the sizes: the title seems about 150% of the size it should be, and the syntax about 130%... they should reduce them a little.
That logo in the upper left needs some padding love. It looks like someone just found out about CSS background-color. A few pixels of padding would go a long way.
There is such a thing as too big a font. I think it happens when you feel like you physically have to move back from the screen to comfortably read the text.
What a genius idea, design only for people with expensive screens, make software only people with a dual Xeon and a GTX1080 can run, and be sure to make the minimum resolution 2560*1900. You're sure to make friends.
Welcome to reality: most people use cheap $20 Chinese screens.
And so the solution is to create content for people on 320x480 pixel screens with no contrast? No.
We should try and create the content in the best possible formats, with the highest available standards, and then the user agent, acting on behalf of the user, should scale that to the user’s current system, increasing or decreasing the contrast.
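In today's browsers a page can at least cooperate with that, by listening to what the user agent reports about the user's preference; a minimal sketch, assuming support for the prefers-color-scheme media feature (the class name is a placeholder):

  // Let the user agent's reported preference drive the scheme. Newer browsers
  // also expose 'prefers-contrast', which can be queried the same way.
  const darkQuery = window.matchMedia('(prefers-color-scheme: dark)');

  function applyScheme(mq) {
    // 'dark-scheme' is an illustrative class name.
    document.documentElement.classList.toggle('dark-scheme', mq.matches);
  }

  applyScheme(darkQuery);
  darkQuery.addEventListener('change', applyScheme);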