Hacker News | saamyjoon's comments

JSC definitely has things resembling isolates.


links? I'm curious to learn more


You're absolutely wrong here. A language must specify its semantics. Those semantics describe the safety properties of that language.


It's implementation-dependent. Most implementations JIT Wasm up front (I think Chakra is the only engine that interprets). So JS loads faster because almost all engines interpret it first (all?).


I think the primarily-AOT compilation strategy we see today is a consequence of many of the initial workloads being frame-based animation, where AOT avoids animation stutters. But this is likely to evolve over time as wasm workloads evolve. If and when we see wasm showing up in frameworks in contexts where page load is the primary concern, I can see a more lazy JIT strategy being the right thing, and then we'd want to specify some sort of developer knob to control this. But given a pure-JIT approach like Chakra is using, wasm should be able to load code byte-for-byte faster than JS.


WebKit already has a more lazy JIT strategy. From the article being discussed:

"WebKit’s WebAssembly implementation, like our JavaScript implementation, uses a tiering system to balance startup costs with throughput. Currently, there are two tiers to the engine: the Build Bytecode Quickly (BBQ) tier and the Optimized Machine-code Generator (OMG) tier. Both rely on the B3 JIT as their low-level optimizer."
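The tiering idea in that quote can be sketched abstractly: a cheap-to-produce baseline tier runs immediately, and an optimizing tier takes over once a function proves hot. Everything below (the threshold, the names, the synchronous tier-up) is illustrative only, not WebKit's actual implementation:

```typescript
// Toy model of a two-tier execution scheme. The baseline compiler is
// fast to run but produces slow code; the optimizer is the reverse.
type Compiled = (x: number) => number;

class TieredFunction {
  private callCount = 0;
  private optimized: Compiled | null = null;
  // Hypothetical tier-up threshold; real engines tune this heavily.
  private static readonly TIER_UP_THRESHOLD = 10;

  constructor(
    private baseline: Compiled,        // produced quickly at startup
    private optimize: () => Compiled,  // expensive, deferred until hot
  ) {}

  call(x: number): number {
    if (this.optimized) return this.optimized(x);
    if (++this.callCount >= TieredFunction.TIER_UP_THRESHOLD) {
      // A real engine would run this compile on a background thread
      // and keep executing baseline code until it finishes.
      this.optimized = this.optimize();
    }
    return this.baseline(x);
  }
}
```

The point of the structure is that startup cost is bounded by the baseline tier, while steady-state throughput is bounded by the optimizer, which is the balance the article describes.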


I agree.

Chakra uses a pure JIT approach? I thought they interpret both JS and Wasm?


Maybe some terminology difference here, but what I meant by 'pure JIT' was that, IIUC, Chakra waits until the function is called to even validate it, then warms up in an interpreter, and only if the function is hot compiles it on a background thread. I think that's one end of the eager/lazy spectrum that may well be what a category of wasm uses will want.
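That lazy end of the spectrum might look roughly like this: validation deferred to first call, interpretation while warming up, compilation only once hot. This is purely a sketch; the threshold, the validation check, and the stand-in "interpret"/"compile" steps are all assumptions, not Chakra's actual pipeline:

```typescript
// Toy lazy pipeline: a function body is neither validated nor compiled
// at module load time. The first call validates it; subsequent calls
// "interpret" it until it is hot enough to "compile".
interface LazyFn {
  bytecode: number[];                     // placeholder for a wasm body
  validated: boolean;
  calls: number;
  jitted: ((x: number) => number) | null;
}

const HOT_THRESHOLD = 5; // hypothetical warm-up count

function callLazy(fn: LazyFn, x: number): number {
  if (!fn.validated) {
    // Deferred validation: only paid for if the function is ever called.
    if (fn.bytecode.length === 0) throw new Error("invalid function body");
    fn.validated = true;
  }
  if (fn.jitted) return fn.jitted(x);
  fn.calls++;
  if (fn.calls >= HOT_THRESHOLD) {
    // Stand-in for handing the body to a background compiler thread.
    fn.jitted = (y) => y + fn.bytecode.length;
  }
  // Stand-in for interpreting the bytecode.
  return x + fn.bytecode.length;
}
```

The appeal for page-load-sensitive workloads is that dead functions cost nothing at all, not even validation time.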


Is “interpreting” necessarily faster?

Consider also that this is perhaps a false distinction. V8 always produces machine code when running JS, IIRC. And that some code is JITed doesn't mean the entire module is.


V8 used to do this. Now they have a JS interpreter called Ignition.


> V8 always produces machine code when running JS IIRC.

They used to. They stopped doing that because it made initial load too slow...


I agree too. I wonder if this has been discussed at TC39.


FWIW, I don’t think this is accurate (not that it really matters much anymore). All the browsers were at 90+% support around the same time.


How so? Go to http://kangax.github.io/compat-table/es2016plus/ in STP or WebKit nightly.


I think OP meant that 2015 is two years ago, so the name makes it look behind.

Truth is, ES6 is what we've called it; ES2015 is the confusing cool new name!


>> ES6 is what we've called it,

Who is "we"?


The WebKit team.

You like that question! Third time in this thread already.


It seems reasonable for people speaking on behalf of a project to identify themselves as speaking on its behalf. The question has turned up three different people not divulging that information so far, so I think it's done the job.


We're not speaking on behalf of WebKit, merely as members of the team explaining the team's approach. Neither are we trying to hide as "not divulging" implies.


Our plot to express opinions about ES6 vs ES2015 as names without explicitly disclosing our affiliation in every post has been unmasked!

In before: who is "our"?


We like using the element of surprise: https://www.youtube.com/watch?v=fG1TK6PdeVM


I wasn't implying deception, but if "we" means "the WebKit team" that would seem to be speaking on behalf of the WebKit team. The clarification was useful, that's all.


The Type Profiler and Code Coverage Profiler are both part of WebKit, not Safari.

