It's kinda frustrating that Mozilla's CEO thinks that axing ad-blockers would be financially beneficial for them. Quite the opposite is true (I believe) since a ton of users would leave Firefox for alternatives.
The whole web ecosystem was first run on VC money, and everything was great until every corner was taken, the land grab was complete, and the time came to recoup the investment.
Once the users were trapped for exploitation, it doesn't make sense to have a browser that blocks ads. How are they supposed to pay software salaries and keep the lights on? People don't like paying for software, demand constant updates and hate subscriptions. Companies all end up doing one of those since the incentives are perverse; that's why Google didn't just ride Firefox till the end and instead created Chrome.
It doesn't make sense to have trillion-dollar companies and for everything to be free. The free part lasts until monopolies are created and the walled gardens are full of people. Then comes the monetization, and those companies don't have a moral compass or anything like it; they have KPIs, stock values and analytics, and it's very obvious that blocking ads isn't good financially.
I agree with the untrue and revisionism bit, but I disagree with it being the opposite of what happened.
People were trying to figure out how to make money off of the Internet from the early days of the Internet being publicly accessible (rather than a tool used by academic and military institutions). It can be attributed to the downfall of Gopher. It can be attributed to the rise of Netscape and Internet Explorer. While the early web was nowhere near as commercial as it is today, we quickly saw the development of search engines and (ad-supported) hosting services that were. By the time the 2000s hit, VC money was very much starting to drive the game. In the minds of most people, the Internet was only 5 to 10 years old at that point. (The actual Internet may be much older, but few people took notice of it until the mid-1990s.)
> People don’t like paying for software, demand constant updates and hate subscriptions.
Yes, No, Yes?
I don't demand constant updates. I don't want constant updates. Usually when a company updates software it becomes worse. I am happy with the initial version of 90% of the software I use, and all I want is bug fixes and security updates.
GP wasn't differentiating between different types of updates in their argument, because it doesn't make sense - they're discussing the economics of it, which doesn't care if you're fixing bugs or not.
>> How are they supposed to pay software salaries and keep the lights on? People don’t like paying for software, demand constant updates and hate subscriptions.
I suspect then it doesn't matter whether Mozilla kills itself or not. You should be fine with the current release of Firefox. Maybe you'd lose the installer, so all you have to do is put it somewhere safe and you're good.
while i may agree with the first line, the rest is a slightly skewed perspective.
> People don’t like paying for software, demand constant updates and hate subscriptions.
hate subscriptions?? maybe. if it's anything like Adobe then yes, people will hate it.
that constant-update demand is something planted by these corporations and their behavior-manipulation tactics.
People were happily paying for perpetual software, which they could "own" on a CD/DVD.
People weren't happily paying; there was a huge piracy business that ran on porn, gambling ads and spyware revenue. Then there were organizations with lots of lawyers, paid by the "pay once, use forever" companies to enforce the pay part, because people didn't want to pay.
One-time-fee software meant that once your growth slows down you no longer make money, yet have plenty of customers to support for free. That's why this model was destroyed by subscription and ad-based "free" software.
The latest example is Affinity, which was the champion of the pay-once-use-forever model; very recently they ended up getting acquired and their software turned into "free" + subscription.
It wasn’t one time fee though.
The one time fee bought a copy of the software and its patches.
A couple of years later a new version would come out and people had the choice between keeping using the old version or buying the new one.
To convince people to buy, they had to add genuinely useful features. I would have bought a new version with new features and better performance. I wouldn't have bought a new version identical to the previous one with AI crammed into it.
The long tail of the web, likely consisting mostly of small or noncommercial sites, is currently numerically huge but individually low-traffic. Meanwhile, user attention is dominated by a relatively small set of commercial and platform sites.
That was the inception age, when very few people were online; it's not the stage of mass adoption. Mass adoption starts with the dot-com era and its massive infrastructure build-up.
But sure, if you think that we should start counting from these years you can do that and add a "public funded" era at the beginning.
I came to the web after the dot-com era, and most of the content (accessible through search) was blogs and forums. It wasn't until SEO that fake content started to grow like weeds.
There were companies that were making some money, but those were killed or acquired by companies that gave their services away for free. Google killed the blogs by killing their RSS reader, since they were long into the money-making stage and their analytics probably demonstrated that it is better for people to search for stuff than to go directly to the latest blog posts.
It's the same thing everywhere; the whole industry is like that. Uber loses money until there's no longer viable competition, then loses less money by jacking up prices. Tech is very monopolistic; Peter Thiel is right about the tech business.
The existing online mass is what attracted the VC in the first place, same as it ever was. It was mostly privately funded and very much a confederacy (AOL vs Prodigy vs BBS) at the time, much like now.
If a time comes when there are zero free browsers with effective ad-blocking, it will create space for a non-free browser that does it. It would create a whole ecosystem.
I currently pay zero for ad-blocking (FF + uBlock Origin) and it works perfectly; but I would pay if I had to.
I think they are trying to strike a balance between making as much money as possible, risking being sued for monopolistic practices, and risking an exodus. Microsoft once overplayed their hand, and the anger and consumer dissatisfaction were so strong that people left Internet Explorer en masse.
So the best situation for Google would be a borderline monopoly where they pay for the existence of their competition, and the competition (Firefox) blocks ad-blockers too by default, but leaving Chrome and Firefox is harder than installing ad-blockers through unofficial channels.
So basically, all the people who swear they never click ads manage to block ads, while Firefox and Chrome print money by making sure that ads are shown and clicked by the masses.
The only reason Mozilla matters in the eyes of Google is because it gives the impression there's competition in the browser market.
But Firefox's users are the kind who choose the browser, not use whatever is there. And that choice is driven in part by having solid ad-blockers. People stick with Firefox despite the issues for the ad-blocker. Take that away and Firefox's userbase dwindles to even lower numbers to the point where nobody can pretend they are "competition". That's when they lose any value for Google.
Without the best-of-the-best ad-blocking I will drop Firefox like a rock and move to the next best thing, which will have to be a Chromium based browser. I'll even have a better overall experience on the web when it comes to the engine itself, to give me consolation for not having the best ad-blocker.
It might be financially beneficial once, as an up-front payment,
but long term, as others have mentioned, it's really not good for the project to remove the only feature that gives Firefox a defensible way to fill its niche in the market.
> Quite the opposite is true (I believe) since a ton of users would leave Firefox for alternatives.
Yes, but keep in mind that's not an individual problem that is solved by switching browsers. If a browser engine dies, the walls get closer and the room smaller. With only Chromium and WebKit left, we may soon have corporate-owned browsers pulling in whatever direction Google and Apple want. I can think of many things that are good for them but bad for us. For instance, ”Web Integrity” and other DRM.
I think people like to imagine it's not viable because the most commonly known ad-blocker refuses to release a version for it. Negative news somehow sticks better.
Fortunately it's not the only one and for example Adguard works perfectly fine.
Maybe, maybe not. It's getting dangerously close to the modern day IE, where some websites just don't work right and everyone has to do arcane shit to make their websites cross platform.
It's also a closed-source browser developed by Apple. It's not competing with Firefox. Everyone contemplating switching to Safari over Firefox is not being honest; they're not even on the same playing field.
> It's getting dangerously close to the modern day I.E.
This line gets thrown around a lot, but if you look at the supported features, Safari is honestly pretty up-to-date on the actual ratified web standards.
What it doesn't tend to do is implement a bunch of the (often ad-tech focused) drafts Google keeps trying to push through the standards committee
The only way you can possibly view Safari as "the modern day IE" is if you consider the authoritative source for What Features Should Be Supported to be Chrome.
You should probably think about that for a bit, in light of why IE was IE back in the day.
> The only way you can possibly view Safari as "the modern day IE" is if you consider the authoritative source for What Features Should Be Supported to be Chrome.
No. Safari is the modern IE in the sense that it's the default browser on a widely used OS, its update cycle is tied to updates of the OS itself by the user, and it drags the web behind by many years because you cannot not support its captive user base.
It's even worse than IE in a sense, because Apple prevents the existence of alternative browsers on that particular OS (every non-Safari browser on iOS is just a UI on top of Safari).
But this can only be by comparison to something. And Apple is very good at keeping Safari up to date on the actual standards. You know—the thing that IE was absolutely not doing, that made it a scourge of the web.
So if it's not Chrome, what is your basis for comparison??
> But this can only be by comparison to something.
The something being the other browsers. Chrome and Firefox. Safari was even behind the latest IE before the switch to Chromium by the way.
> the thing that IE was absolutely not doing, that made it a scourge of the web.
You're misremembering; IE also kept improving its support for modern standards. The two main problems were that it was always behind (like Safari) and that people were still using old versions because it was tied to Windows, like Safari with iOS. When people don't update their iPhone, whether because they know it will become slow as hell as soon as they use the new iOS version on old hardware, or just because they don't want their UI to change AGAIN, they're stuck on an old version of Safari.
I'm sorry, but you're wrong. I am not remotely misremembering, and I'll thank you not to tell me what's happening in my own head.
IE 6 stood stagnant for years, while the W3C moved on without them, and there was no new version.
> The something being the other browsers. Chrome and Firefox.
And can you name a single thing Firefox does right, that Chrome didn't do first, or that came from an actual accepted web standard (not a proposal, not a de-facto standard because Chrome does it), that Safari doesn't do?
The reason why IE 6 kept haunting us all was because later versions were never available on Windows XP.
> actual accepted web standard
The only thing for which there is an actual standard that matters is JavaScript itself (or rather ECMAScript) and on that front Apple has pretty much always been a laggard.
Saying “Apple is compliant with all of the W3C standards” is a bit ridiculous when that organization was obsolete long before Microsoft ditched IE. And Apple itself acknowledges that, being one of the founding parties of the organization that effectively superseded the W3C (WHATWG).
> The reason why IE 6 kept haunting us all was because later versions were never available on Windows XP.
First of all, according to the IE Wikipedia page, that's not true—7 & 8 were available for XP.
Second of all, this ignores the fact that for five years, there was only IE6. And IE6 was pretty awful.
> Saying “Apple is compliant with all of W3C standards” is a bit ridiculous when this organization was obsolete long before Microsoft ditched IE. And Apple itself acknowledge that, themselves being one of the founding parties of the organization that effectively superseded W3C (WHATWG).
And now you have identified a major component of the problem: in the 2000s, the W3C was the source of web standards. Safari, once it existed, was pretty good at following them; IE (especially IE6) was not.
Now, there effectively are no new standards except for what the big 3 (Safari, Chrome, and Firefox) all implement. And Firefox effectively never adds new web features themselves; they follow what the other two do.
So when you say "Safari is holding the web back," what you are saying is "Safari is not implementing all the things that Google puts into Chrome." Which is true! And there is some reason to be concerned about it! But it is also vital to acknowledge that Google is a competitor of Apple's, and many of the features they implement in Chrome, whether or not Google has published proposed standards for them, are being implemented unilaterally by Google, not based on any larger agreement with a standards body.
So painting it as if Apple is deliberately refusing to implement features that otherwise have the support of an impartial standards body, in order to cripple the web and push people to build native iOS apps, is, at the very best, poorly supported by evidence.
I prefer Firefox over Chromium. But I much more prefer having a working ad blocker. Therefore I support that statement and when Firefox starts removing support for that, I'm out and there's enough alternatives I can go to, even tho they're Chromium based.
I use the Duck Duck Go browser for almost everything. It is open source on the iOS/Android/macOS platforms, though I think there are parts of their platform that are not. The DDG browser hits all my privacy requirements.
Apple doesn't collect your browsing data, they build in privacy controls that are pretty much as strong as they can manage given the state of the world, and while it doesn't support uBO, it supports a variety of pretty solid adblockers (I use AdGuard, which, AFAICT, Just Works™ and even blocks YouTube ads most of the time, despite their arms race).
That's what their marketing wants you to believe, at least.
Their privacy policy is very clear that that's not the case, though:
> we may collect a variety of information, including:
> […]
> Usage Data. Data about your activity on and use of our offerings, such as app launches within our services, including browsing history; search history;
And users would flee not just because they'd be seeing the ads, but because Firefox would obviously be the slowest browser again. Stripping the ads is a big performance boost, so right now Firefox feels snappier than Chrome on ad-laden pages.
You can't even imagine how little the rest of the world cares about this.
Do people in California care that slightly under 50% of my state's population are at or below poverty level? Do they care that most of the rest spend 55-60% of our income on food? Do they care that our life expectancy is 15 years lower than that in California, mostly because of terrible pollution caused by extraction and processing of minerals which our beloved government then sells to the US and several European countries, and pockets the money?
Do they care about conflict minerals in general, used to build electronics for their enjoyment? Have they done anything about this?
This American political bickering does not even register on our radars when choosing a web browser.
"Europe's problems are the world's problems but the world's problems are not Europe's problems.", as India's Mr. Jaishankar is fond of saying.
Wasmtime, being an optimizing JIT, usually is ~10 times faster than Wasmi during execution.
However, execution is just one metric that might be of importance.
For example, Wasmi's lazy startup time is much better (~100-1000x) since it does not have to produce machine code. This can result in cases where Wasmi is done executing while Wasmtime is still generating machine code.
> Every iteration of the loop polls the network and input drivers, draws the desktop interface, runs one step of each active WASM application, and flushes the GPU framebuffer.
This is really interesting and I was wondering how you implemented that using Wasmi. Seems like the code for that is here:
> It might interest you that newer versions of Wasmi (v0.45+) extended the resumable function call feature to make it possible to yield upon running out of fuel:
That is really interesting! I remember looking for something like that in the Wasmi docs at some point but it must have been before that feature was implemented. I would probably have chosen a different design for the WASM apps if I had it.
I am really sorry I waited so long to extend Wasmi's resumable calls with this very useful feature. :S
Feel free to message me if you ever plan to adjust your design to make use of it.
Not OP, but I'm confused how this would be helpful. You're saying for example, he can use this function to create a coroutine out of a function, begin it, and if the function fails by e.g. running out of memory, you can give the module more memory and then resume the coroutine? If so, how is that different than what naturally happens? Does wasm not have try/catch? Also, wouldn't the module then need to back up manually and retry the malloc after it failed? I'm so lost.
Wasmi's fuel metering can be thought of as if there were an adjustable counter, and for each instruction that Wasmi executes this counter is decreased by some amount. If it reaches 0, the resumable call yields back to the host (in this case the OS), where it can be decided how, or whether, the call shall be resumed.
For efficiency reasons, fuel metering in Wasmi is not implemented exactly as described above, but I wanted to provide a simple description.
With this, one is no longer reliant on clocks or other measures to give each call its own time frame: you provide an amount of fuel for each Wasm app, which can be renewed (or not) when it runs out. So this is useful for building a Wasm scheduler.
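To make the mechanism concrete, here is a toy model of that counter in Rust. This is not the actual Wasmi API (the `App`, `Op`, and `Outcome` types are made up for illustration); it only sketches the yield-and-resume idea: an app runs until its fuel slice is spent, then control returns to the host scheduler, which may grant another slice.

```rust
// Toy model of fuel metering (NOT the Wasmi API): the interpreter
// decrements a fuel counter per instruction and yields to the host
// when it runs out, letting the host decide whether to resume.

#[derive(Clone, Copy)]
enum Op {
    Add(i64), // add a constant to the accumulator
}

enum Outcome {
    Finished(i64),    // program ran to completion with this result
    OutOfFuel(usize), // yielded; payload is the instruction to resume at
}

struct App {
    code: Vec<Op>,
    pc: usize,
    acc: i64,
}

impl App {
    fn run(&mut self, mut fuel: u64) -> Outcome {
        while self.pc < self.code.len() {
            if fuel == 0 {
                return Outcome::OutOfFuel(self.pc);
            }
            fuel -= 1;
            match self.code[self.pc] {
                Op::Add(n) => self.acc += n,
            }
            self.pc += 1;
        }
        Outcome::Finished(self.acc)
    }
}

fn main() {
    // A "program" of 10 instructions, run on a fuel budget of 4 per slice,
    // the way a cooperative scheduler would time-slice each Wasm app.
    let mut app = App { code: vec![Op::Add(1); 10], pc: 0, acc: 0 };
    let mut slices = 0;
    loop {
        slices += 1;
        match app.run(4) {
            Outcome::Finished(result) => {
                println!("finished after {slices} slices, result = {result}");
                break;
            }
            // Host regains control here: it could schedule another app,
            // bill the user, or refuse to refuel.
            Outcome::OutOfFuel(_) => continue,
        }
    }
}
```

The same shape covers the serverless scenario mentioned elsewhere in the thread: on `OutOfFuel` the host can persist state, bill the user, or simply drop the guest.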
Wasmtime's epoch system was designed specifically to have a much, much lower performance impact than fuel metering, at the cost of being nondeterministic. Since different embeddings have different needs there, wasmtime provides both mechanisms. Turning epochs on should be trivial if your system provides any sort of concurrency: https://github.com/bytecodealliance/wasmtime/blob/main/examp...
I don't know how fuel metering in Wasmtime works and what its overhead is but keep in mind that Wasmi is an interpreter based Wasm runtime whereas Wasmtime generates machine code (JIT).
In past experiments, I remember that fuel metering adds roughly 5-10% overhead to Wasmi executions. The trick is not to bump or decrease a counter for every single executed instruction, but instead to group instructions together into so-called basic blocks and bump the counter once for the whole group of instructions.
This is also the approach that is implemented by certain Wasm tools to add fuel metering to an existing Wasm binary.
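A rough sketch of that basic-block trick (hypothetical types, not Wasmi internals): the instruction count of each straight-line block is precomputed, and the fuel counter is updated once at block entry instead of once per instruction.

```rust
// Hedged sketch: charge fuel once per basic block rather than per
// instruction. A block of N instructions costs N fuel, but only one
// counter update is actually executed, at the block's entry.

struct BasicBlock {
    cost: u64, // number of instructions in the block, precomputed
}

/// Charge the whole block up front; returns false if fuel is exhausted,
/// in which case the runtime would yield before executing the block.
fn charge(fuel: &mut u64, block: &BasicBlock) -> bool {
    if *fuel < block.cost {
        return false;
    }
    *fuel -= block.cost;
    true
}

fn main() {
    // Three blocks of 3, 5 and 4 instructions; 10 fuel covers the first two.
    let blocks = [
        BasicBlock { cost: 3 },
        BasicBlock { cost: 5 },
        BasicBlock { cost: 4 },
    ];
    let mut fuel = 10u64;
    let mut executed = 0;
    for b in &blocks {
        if !charge(&mut fuel, b) {
            break;
        }
        executed += 1;
    }
    println!("executed {executed} blocks, {fuel} fuel left");
}
```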
This is really cool stuff. I've always wanted fuel-based work with a high level programming languages. Having a language compile to wasm with wasmi now seems like a nice way to achieve that.
I first encountered this with gas in the Ethereum VM. For Ethereum, they price different operations to reflect their real world cost: storing something forever on the blockchain is expensive whereas multiplying numbers is cheap
I’m not sure what it’s used for in this context or how instructions are weighted
Let's consider that you create a serverless platform which runs wasm/wasi code. The code can do an infinite loop and suck resources while blocking the thread that runs the code in the host. Now, with a fuel mechanism the code yields after a certain amount of instructions, giving the control back to the host. The host can then do things such as stop the guest from running, or store the amount of fuel to some database, bill the user and continue execution.
Thanks! I have lots more too. Are there directions in space? What kind of matter is fire made of? If you shine a laser into a box with one-way mirrors on the inside, will it reflect forever? Do ants feel like they're going in regular motion and we're just going in slow motion? Why do people mainly marry and make friends with people who look extraordinarily similar to themselves? How do futures work in Rust? Why is the C standard still behind a paywall? Let me know if you need any more great questions.
Flame is what you see when gases burn in the air. As the material burns, it breaks down and releases flammable gases, which burn too, giving the effect of flame. If you have ever tried burning fine-grade steel wool, you will have seen that it burns without any flame because the iron burns directly without making gases first.
"If you shine a laser into a box with one-way mirrors on the inside, will it reflect forever?"
No, because each reflection comes at a cost (some light transformed to heat)
"Why do people mainly marry and make friends with people who look extraordinarily similar to themselves?"
To avoid too many surprises and have a more stable life. (I didn't choose that path.)
(But I feel it would be too much OT answering the other questions and don't want to distract from this great submission or the interesting Wasmi concept)
No, I do not accept this. There must be a way. What if the mirror box has a high enough heat? Would it work then? The box could be made of a heat resistant material, like fiberglass.
It's not that the mirror or box is damaged by heat, it's that each bit of heat energy comes from a bit of light energy. Eventually the light bounces enough times that there's no energy left in it.
I understand, but what I mean is, what if there is no more opportunity for the light to emit heat, because the surrounding environment is already saturated with so much heat that it can't accept more? Is this a possible way to prevent the light from emitting heat and therefore prevent the light from decreasing its luminousness? There must be a way!
A few notes:
* There's no such thing as an "absolute hot" state that means no more heat can be added
* Blackbody radiation means that above a certain temperature, regardless of what you make your mirror out of, it will be spontaneously emitting visible light at all times.
Indeed. There is only absolute zero, which cannot get colder; more heat is always possible, since more heat means more rapid movement of particles, while absolute zero at 0 K means no movement.
Basically, what you propose negates the nature of reality. There is always friction/energy loss into heat (increased chaotic movement). The only way to deal with it, if you want permanent cycles, is to constantly add energy in the same amount that is lost.
It's awesome that Wasmi is fast enough to run GUI apps. I'm working on an app runtime for making highly portable GUI apps. I'm targeting wasm because it seems to strike a good balance between performance and implementation simplicity. Ideally it would be possible to run apps on a runtime hacked together by a small team or even a single person. The fact that an interpreted (if highly optimized) wasm runtime like Wasmi is clearly capable of running GUI apps is exciting.
Thank you! And thanks for making Wasmi, it's a really impressive project and it's the reason why I decided to go this whole WASM sandbox route (because I could embed it easily) :)
Yeah, it's one of those projects where I'm so impressed that I'm saying nothing because there's nothing to say; it's just really impressive. I'm not sure what will come of this project, but it has a lot of potential to at least inspire other projects or spark important discussions around its innovations.
Have you had experience implementing SSA via sea-of-nodes representation? Could it be that in this case dominance frontier is no longer important and one could use the simpler SSA construction algorithms that do not require dominance frontiers?
Yes this iterative process is indeed very visible. Wasmi started out as a mostly safe Rust interpreter and over time went more and more into a performance oriented direction.
Though I have to say that the "list of addresses" approach is not optimal in Rust today, since Rust is missing explicit tail calls. Stitch applies some tricks to achieve tail calls in Rust, but this has some drawbacks that are discussed in detail in Stitch's README.
Furthermore, the "list of addresses" approach (also known as threaded-code dispatch) comes in several variants. From what I know, both Wasm3 and Stitch use direct-threaded code, which stores a list of function pointers to instruction handlers and uses tail calls or computed goto to fetch the next instruction. The downside compared to bytecode is that direct-threaded code uses more memory, and it is only faster when coupled with computed goto or tail calls. Otherwise, compilers nowadays are pretty solid in their optimizations of loop-switch constructs and could technically even generate computed-goto-like code.
Thus, due to the lower memory usage, the downsides of using tail calls in Rust, and the potential of compiler optimizations for loop-switch constructs, we went with the bytecode approach in Wasmi.
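For illustration, here is a minimal sketch of the two dispatch styles discussed above. All names are made up, and since Rust has no guaranteed tail calls, the "threaded" variant is only an approximation: a table of handler function pointers indexed in a loop, rather than each handler jumping directly to the next.

```rust
// Hedged sketch contrasting bytecode loop-switch dispatch with a
// (tail-call-free) approximation of threaded-code dispatch.

#[derive(Clone, Copy)]
enum Op { Inc, Dec, Halt }

// Style 1: compact bytecode with a loop-switch dispatcher. Modern
// compilers optimize this pattern well.
fn run_switch(code: &[Op]) -> i64 {
    let (mut acc, mut pc) = (0i64, 0usize);
    loop {
        match code[pc] {
            Op::Inc => acc += 1,
            Op::Dec => acc -= 1,
            Op::Halt => return acc,
        }
        pc += 1;
    }
}

// Style 2: "threaded" code stores a handler pointer per instruction,
// trading extra memory for (potentially) cheaper dispatch.
struct State { acc: i64, pc: usize }
type Handler = fn(&mut State) -> bool; // false = halt

fn h_inc(s: &mut State) -> bool { s.acc += 1; s.pc += 1; true }
fn h_dec(s: &mut State) -> bool { s.acc -= 1; s.pc += 1; true }
fn h_halt(_: &mut State) -> bool { false }

fn run_threaded(code: &[Handler]) -> i64 {
    let mut s = State { acc: 0, pc: 0 };
    while code[s.pc](&mut s) {}
    s.acc
}

fn main() {
    let bytecode = [Op::Inc, Op::Inc, Op::Dec, Op::Halt];
    let threaded: [Handler; 4] = [h_inc, h_inc, h_dec, h_halt];
    assert_eq!(run_switch(&bytecode), run_threaded(&threaded));
    println!("both dispatchers compute {}", run_switch(&bytecode));
}
```

Note the memory trade-off visible even here: the `Op` array is one byte per instruction, while the handler table is one pointer (8 bytes on 64-bit targets) per instruction.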
When using lazy-unchecked translation with relatively small programs, setting up the Linker can sometimes take up the majority of the overall execution time with ~50 host functions (which is a common average number). We are talking about microseconds, but microseconds start to matter at these scales. This is why we implemented the LinkerBuilder for Wasmi, which yields a 120x speed-up. :)
I am aware of Wizard and I think it is a pretty interesting Wasm runtime. It would be really cool if it was part of Wasmi's benchmark testsuite (https://github.com/wasmi-labs/wasmi-benchmarks). Contributions to add more Wasm runtimes and more test cases are very welcome.
The non-WASI test cases only test translation performance, so their imports do not need to be satisfied. That would have been necessary if the benchmarks tested instantiation performance instead. Usually, though, instantiation is pretty fast for most Wasm runtimes compared to translation time.
Wasmi is an independent project nowadays. And you are right that it was originally designed for efficient smart contract execution but with a scope for more general use.