
You know what the internet needs? User agents.

We've got this idea stuck in our heads that only the website itself is allowed to curate content. Only Facebook gets to decide which Facebook posts to show us.

What if, instead, you had a personal AI that read every Facebook post and then decided what to show you? Trained on your own preferences, under your control, with whatever settings you like.

Instead of being tuned to line the pockets of Facebook, the AI is an agent of your own choosing. Maybe you want it to actually _reduce_ engagement after an hour of mindless browsing.

And not just for Facebook, but every website. Twitter, Instagram, etc. Even websites like Reddit, which are "user moderated", are still ultimately run by Reddit's algorithm and could instead be curated by _your_ agent.

I don't know. Maybe that will just make the echo chambers worse. But can it possibly make them worse than they already are? Are we really saying that an agent built by us, for us, will be worse than an agent built by Facebook for Facebook?

And isn't that how the internet used to be? Back when the scale of the internet wasn't so vast, people just ... skimmed everything themselves and decided what to engage with. So what I'm really driving at is some way to scale that up to what the internet has since become. Some way to build a tiny AI version of yourself that goes out and crawls the internet in ways that you personally can't, and returns to you the things you would have wanted to engage with had it been possible for you to read all 1 trillion internet comments per minute.
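To make that concrete, here's a minimal sketch of what such an agent's inner loop might look like. Everything in it (the feed, the authors, the preference weights, the budget) is hypothetical and just illustrates the shape of the idea; a real agent would need actual feed access and a far richer preference model:

    # Hypothetical sketch of a personal user agent: it, not the platform,
    # scores posts against *your* preferences and enforces *your* budget.
    from dataclasses import dataclass

    @dataclass
    class Post:
        source: str   # e.g. "facebook", "twitter", "reddit"
        author: str
        text: str

    # Preference weights chosen and tuned by the user, not the platform.
    PREFS = {"close_friend": 3.0, "acquaintance": 1.0, "brand": -5.0}
    FRIENDS = {"alice": "close_friend", "bob": "acquaintance"}

    def score(post: Post) -> float:
        """Rank a post by the user's own preferences; unknown authors
        are treated as brands/promotions and downranked."""
        return PREFS[FRIENDS.get(post.author, "brand")]

    def curate(posts: list[Post], budget: int) -> list[Post]:
        """Return only the top `budget` posts; the user, not the
        platform, caps engagement."""
        return sorted(posts, key=score, reverse=True)[:budget]

    feed = [
        Post("facebook", "alice", "baby photos!"),
        Post("facebook", "megacorp", "BUY NOW"),
        Post("reddit", "bob", "interesting article"),
    ]
    for post in curate(feed, budget=2):
        print(f"{post.source}/{post.author}: {post.text}")

The point is only that the ranking and the engagement cap live on the user's side, not the platform's.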



The fundamental flaw is this:

The primary content no user wants to see and every user agent would filter out is ads. Since ads are the primary way sites stay in business, they are obligated to fight against user agents or other intermediary systems.

The ultimate problem is that Facebook doesn't want to show you good, enriching content from your friends and family. They want to show you ads. The good content is just a necessary evil to make you tolerate looking at ads. Every time you upload some adorable photo of your baby for your friends to ooh and aah over, you're giving Facebook free bait that they then use to trap your friends into looking at ads.


I sure am tired of hearing about "the fundamental flaw" in empowering people. What you describe is not a flaw in empowerment, it's a flaw in their business model, and it's one that can be fixed (i.e. "innovate a better business model"). Can we stop propagating the idea that people who do not want to use their limited bandwidth and processing power to rasterize someone else's advertising are somehow "flawed"?

The only thing more insane than blaming users for having self-interest is the people who pretend that Facebook et al. are somehow owed the business model they have, painting ad-blockers as some kind of dangerous society-destabilizing technology instead of the commonsense response to shitty business practices it clearly is.


The point is that fighting Facebook on Facebook is a losing game and "innovate a better business model" has been tried, is being tried, and is not working because it is hard. A plan that does not work is indeed "flawed", no matter how noble and natural the intentions.


"Users don't care about your ad network" is not a "plan," it is a reality, and calling it "flawed" is just corporate propaganda. I'm sure you're arguing in good faith but the very foundation of this assessment is fundamentally incompatible with reality.


Well, no, today's model is "users tolerate your ad network in return for free content". Which is clearly true given that Facebook is making profits despite everyone and their mother grumbling about ads.

Something has to pay the bills.


Yes, tolerating the ad network definitely falls under "users don't care"... until they do, and they start taking measures to counteract it, and then we're back to "something has to pay the bills," which is where I'm suggesting some R&D investment might be wise.


> until they do, and they start taking measures to counteract it,

When? Adblock has been free and easy for years now


Adblock usage has been creeping upwards every year. It’s only the fact that most people use mobile devices that has slowed its growth.

Now, extensions to block ads are mature on mobile.

On some sites I run, I see 60% blocking on desktop, 10% on mobile. If it hits 60% on mobile as well, then the ads would cease to be profitable.


Do you have a source for those numbers? By your argument, advertising should already be unprofitable on desktop, which seems unlikely.


It's a site that they run, they are the source for those numbers.

A quick google says somewhere from 20%-40% use adblockers. 60% sounds high, but depending on the website, I can see it. For example, I bet at least 60% of visitors to gnu.org are blocking ads and/or trackers.


If not ads then what would pay the bills? We all know users won’t pay for a subscription service.


We all know users won’t pay enough for a subscription service to make a huge profit that makes the VC happy, makes the founder one of the ten richest people in the country, and supports a ton of offices and salaries in some of the priciest places to live and work.

But the tiny Mastodon server I run for myself, with a total user count in the low triple digits, costs about fifty bucks a month, and the users who are willing to pay cover half of that. I could probably get more of them to cover it if I was more aggressive about asking, but I prefer to keep it super low-key. I could also lower those costs if I felt like putting some work into optimizing it.

It’s not my job, it’s a thing I run on the side and put a few hours of technical work into every few months. I ain’t gonna get rich from it but it gives my friends a nice place to chat on the Internet.


Yeah but the vast majority of Facebook users don't care enough to learn how to use Mastodon. Try asking the average Social Security recipient to use IRC.

I reject the notion that "We all know users won’t pay enough for a subscription service to make a huge profit that makes the VC happy, makes the founder one of the ten richest people in the country etc" because...if you can figure out how to get Grandma to use a federated Mastodon-like service, then you would do just that.


“Hi Mom! We’ve decided to leave Facebook. Jane’s set up our own little substitute. It’s where we’ll be posting all the pictures of the kids from now on. I’ve written up the basics in this letter; if you have any questions we can talk about it when I see you next week.

— Suzy <3”


I don’t think what you describe is a good experience. You provide a benefit for hundreds of people, but can’t even get 25 dollars a month in return.

Contrast this with real life hobby groups, and service groups, where you can more easily raise money to cover costs.


I’m giving something to my friends. This makes them happy, and it makes me happy to see them happy. I’ve made a few new friends because of this, I’ve gotten to know some acquaintances better too. I am richer in my connections.

There’s a low-key buzz of occasional thanks and favors in my life that I wouldn’t get if I wasn’t doing this, either. And occasionally this connection lets one of my friends help out another who needs it, financially or emotionally, when they wouldn’t even necessarily be in contact with each other, much less interested in helping out, without the shared space I’ve created.

I think this is a great experience.

(Total active user count is more like a few dozen, btw.)


Perhaps if people aren’t willing to pay for it the service isn’t that valuable after all.


I see two problems with this. First of all, the service Facebook provides isn't valuable at all unless all your friends and family are also using it and posting content. So unless you can get a critical mass of users to switch to a new platform with a different business model, it won't succeed. Secondly, we've become accustomed to not having to pay for social media, and asking to pay for a social media platform is a little like asking to pay for air. Sure, yours might not have as much pollution, but I can get something almost as good for free.

I've actually experienced the latter, as I looked for an alternative to Gmail. I just found it hard to justify paying for an email provider, where the only real value add to me is the absence of ads, and not being Gmail. And really, the price is mostly irrelevant. For me to be willing to pay _anything_, it would have to have a really compelling reason to move. The value of not seeing ads is just not that high for me. And I don't think you would say there isn't value in an email provider.


I use Fastmail for my important email because I want to be paying a company to take me seriously as a customer. They’re not going to just lock me out of my account because of some random abuse trigger elsewhere in their system. You’re probably not seeing the value because the bad thing hasn’t happened to you yet, but it might and when it does there’s not much recourse.


Perhaps if people don't find a service valuable, then we should everyone to stop using it.

If that argument sounded absurd to you, it's probably because it is. The services are valuable because people ultimately do use them — a lot of them, even. They pay for them indirectly by agreeing to look at ads.

There are loads of services we do not directly pay for, like the fire department and the public library — and yet they are immensely valuable.


The argument sounds nonsensical because it’s missing a verb.

I don’t agree that just because people will use a free thing that means it has a lot of value. Note I didn’t say that FB has no value, just that it might not be as valuable as one might think.

Considering most of their value is their messenger platform I don’t think FB is really worth much at all beyond their social graph.


> The argument sounds nonsensical because it’s missing a verb.

That's a typo on my part — it doesn't change the veracity of the argument.

> I don’t agree that just because people will use a free thing that means it has a lot of value. Note I didn’t say that FB has no value, just that it might not be as valuable as one might think.

Sure, but how do you measure the "true" value? If you can answer that question, you will probably become a billionaire.

> Considering most of their value is their messenger platform I don’t think FB is really worth much at all beyond their social graph.

What are you basing this on? You may only find the messenger platform to be valuable, but how do you know how others perceive the FB platform/product?


I’m going on what I’ve observed in FB users around the world. Their most dedicated users are people in developing countries whom they have convinced that Facebook is the internet.


People do pay for Facebook, by being exposed to ads.


Facebook is only 16 years old. The idea of social networks is only a few years older than that. Surely we can't have tried and failed at every possible alternative already?


I can buy web hosting for less than a coffee per month that can sling thousands of static HTTP requests per second. In a world where something like Mastodon/GNU Social was the norm, any hobbyist could opt to run one fraction of a grand federated social network out of the goodness of their hearts, for spare change, or for a small fee to their users.

Centralized, siloed social networks are only expensive to run because they're centralized. Things were better when the norm was to start a blog instead of using someone else's walled garden.


It appears to me that the parent post was criticizing the ad-based business model - not endorsing it.


I'm complaining about the structure of the dialog around this issue, not casting aspersions on the parent post's argument itself. It's impossible to have a reasonable discussion when the terminology in use is strongly prejudiced against one of the key parties in the relationship.


Reality is strongly prejudiced against one of the key parties in the relationship. Users, today, tolerate ads in exchange for free content. Any reasonable discussion — where continued delivery of content is a desired end goal — needs to come up with an answer for how we pay for it. Calling out "fundamental flaws" is one such way of doing that.


Stating things nakedly, using the assumptions and perspective of the big guy, can be a powerful rhetorical style when advocating for the little guy. See any of Chomsky's political writing as an example.


> I sure am tired of hearing about "the fundamental flaw" in empowering people.

I'm all for empowering people. But adding personally controlled user agents to Facebook is a fundamentally flawed solution. There is no path for that to succeed because the primary content users will want to filter is ads, and the primary content Facebook needs people to see is ads. Thus user agents are an existential threat to Facebook and since Facebook controls all the content, they will ensure user agents are not allowed.

The core business model does not align Facebook's incentives with users' incentives. You can't fix that at the content level.


> it's a flaw in their business model, and it's one that can be fixed (i.e. "innovate a better business model")

Doesn't that assume that there is a better business model?


Yes. They're based on users voluntarily giving payment for services rendered and, importantly, being satisfied with the transaction.


Which company uses that business model and has a size comparable to Facebook?


Apple.


I thought it was obvious I meant from the sector in question. My mistake, I'll try again.

Do you know of a social media company (or ad company using a product to garner the data, if that helps) that uses that model with a comparable size to Facebook? If not, are there any that are making ground?


"The ultimate problem is that Facebook doesn't want to show you good, enrishing content from your friends and family."

Well, it is someone else's website. What do you expect? Zuckerberg has his own interests in mind.

In 2020, it is still too difficult for everyone to set up their own website, so they settle for a page on someone else's.

If exchanging content with friends and family (not swaths of the public who visit Facebook - hello advertisers) is the ultimate goal, then there are more efficient ways to do that without using Zuckerberg's website.

The challenge is to make those easier to set up.

For example, if each group of friends and family were on the same small overlay network they set up themselves, connecting to each other peer-to-peer, it would be much more difficult for advertisers to reach them. Every group of friends and family on a different network instead of every group of friends and family all using the same third party, public website on the same network, the internet.

Naysayers will point to the difficulty of setting up such networks. No one outside of salaried programmers paid to do it wants to even attempt to write "user agents" today because the "standard", a ridiculously large set of "features", most of which benefit advertisers not users, is far too complex. What happens when we simplify the "standard"? As an analogy, look at how much easier it is to set up Wireguard, software written more or less by one person, than it is to set up OpenVPN.


I don't think that "user-agents" are the hard part either. At this point, I think any grad student would happily write an NN implementation that took various posts as input and returned a sorted list based on your preferences (with input features like bag of words, author, links, time, etc., that the user could put more or less weight on just by simple upvote/downvote).
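For instance, a sketch of that idea (the features and numbers are invented for illustration, not anyone's actual implementation):

    # Toy personal ranker: a linear model over simple post features,
    # trained online from the user's own upvotes/downvotes.
    import math

    VOCAB = ["politics", "cats", "sale", "research"]

    def features(post: dict) -> list[float]:
        words = post["text"].lower().split()
        bow = [float(w in words) for w in VOCAB]           # bag of words
        return bow + [float(post["author_followed"]),      # author signal
                      math.exp(-post["age_hours"] / 24.0)] # recency decay

    weights = [0.0] * (len(VOCAB) + 2)

    def score(post: dict) -> float:
        """Probability-like estimate that the user wants to see this."""
        z = sum(w * x for w, x in zip(weights, features(post)))
        return 1.0 / (1.0 + math.exp(-z))

    def feedback(post: dict, upvote: bool, lr: float = 0.5) -> None:
        """One online logistic-regression step per vote."""
        err = (1.0 if upvote else 0.0) - score(post)
        for i, x in enumerate(features(post)):
            weights[i] += lr * err * x

    post = {"text": "new research on cats", "author_followed": True,
            "age_hours": 2}
    print(round(score(post), 3))  # 0.5 before any feedback
    feedback(post, upvote=True)
    print(round(score(post), 3))  # rises after a single upvote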

The problem is that no one has the incentive to host such a service for free, and users want the content to be available 24/7. So it's not as simple as just setting up a peer-to-peer network. Users who just use a phone as their primary computer will still want to be able to publish to their millions of followers, and so it wouldn't work to have those millions of people connect directly to this person's device. Maybe you can solve that with a BitTorrent-like approach, but the problem gets harder when you include the ability to send messages privately.


"Users who just use a phone as their primary computer will still want to able to publich to their millions of followers, and so it wouldn't work to have these millions of people connect directly to this person's device."

You have shifted the discussion from small overlay network for friends and family to large overlay network for "millions of followers".

Those methods of sharing content with "millions of followers" are already available and will no doubt continue to be available.

A small private network is a different idea, with a different purpose. People will always have the choice of using a public network however a small overlay can avoid sending traffic through third party servers, like Facebook's.

There is no requirement that a service has to be "free", or supported by ads. This is something else you injected in the discussion. I use free software to set up overlays, but I have to pay for internet service and hosting. The cost of the "service" is not the setup it is the internet access and hosting.


Your idea doesn't sound particularly tenable. A pay social network that limits you to only your close friends and family, that will have no network effects, fewer features, and be more difficult to set up...

It's easy for people to point out what they don't like about Facebook, but I don't think you are really comprehending why they dominated the social network space to begin with. It's not as easy as making a product that doesn't advertise, if it costs money to use instead.


Tenable for what? You are injecting your own ideas. Why would I care about the reasons Facebook is popular. The idea I submitted was to make the process of setting up overlay networks easier, not to try to start a web-based business. Making software easier to use is a tenable idea. The ideas you are introducing might not be tenable. However, they are not my ideas. The pattern I see is you introduce some idea, attribute it to me, then shoot it down.

https://en.wikipedia.org/wiki/Straw_man


> The problem is that no one has the incentive to host such a service for free

matrix.org is approaching this


I guess it's worth clarifying what exactly you're talking about. If you want to share images and text with a small group of people, okay, that might be useful in some cases. But that's not the use case that Facebook users have in mind - you're setting the bar almost comically low and the impact on the actual landscape of the web will be exactly zero.

If you mean something that can make a splash in the social media space to address the "user agents on Facebook" problem, color me skeptical about the prospect of competing on useful features with the Facebook behemoth while fighting the complexity up and down the stack of making everything decentralized and trying to make it friction-free for casual users to run, with no funding from ads, and starting out at square one on network effects. Yes, Mastodon is a possible counter to this line of argument, but Twitter is so stagnant and their product is so simple that I feel like it's almost a unique case. And for most people the Facebook use case includes being able to find all their real-life contacts; by that measure even Mastodon would fail.


You can be as skeptical as you want to be. Even assuming this idea was intended to "make a splash" (it isn't), these sort of comments make no difference whatsoever. We have all seen how HN commenters have criticised ideas that, rightly or wrongly, later went on to become successful businesses. The thing is, this is definitely not intended to be a business. It is just some software that exists and that works. If it works for me, then it is "successful". There is no budding "founder" to shoot down. Just a user with some software that works. The assumptions of "make a splash in the social media space" and "competing with Facebook" are all wrong.

The business of Facebook is not the "comically low" bar of sharing text and images with friends and family. An overlay solution that avoids sending traffic to a third party server is not "competing with Facebook". However it could be used to avoid Facebook which is the point we are discussing here. The fact that only a small number of people actually use a solution does not mean it is a "failure". If the software is relatively small, compiles fast, runs on different OS and architectures, stays available for download and reliably works as intended, then to me it is "successful". The way I evaluate "success" and "failure" of a software is probably different from many commenters/readers.


And, secondarily to your main point, the idea that "sharing images and text with a small group of people" is some weird niche case that Facebook users don't care about is... pretty off-base. I'd say it's the main use case for the majority of Facebook users.


i have a fundamental issue calling a content-curating, psychological-experiment-running platform visited by hundreds of millions of people daily 'someone else's website'. the fact that it is privately owned doesn't matter if nation states use it to wage information wars against other nation states' citizens. to make matters worse, the 'someone else' in question knows about it perfectly well and is fine with it because it means he's showing more ads.

this is plain and simple fucked up.


Well, that is what it is. No one knew a single website could grow so large, but it did. Even though there are thousands of people working for its owner, when reading articles like the OP we are reminded how much control he still has over it. No doubt he still thinks of it as his personal creation. Of course, "99.99999%" of the content is not his. Perhaps most of the people who sign up on Facebook are not employed by nation states but just ordinary people who want an easy way to stay connected to friends and family. Maybe these people should have a better way to stay connected than using a public website.


indeed this happened but that's hardly a reason to let it continue to be.


Why are you defending Zuckerberg for being a dick? If you have power, you have responsibility: full fucking stop.

The idea that it's okay to be a selfish child with power is tantamount to allowing driving while drunk. Power is deadly, you can just as easily crush a person's life as you could their legs with a car. Don't drive drunk, don't be in power if you can't be a responsible citizen about it.


Power does not imply responsibility, nor vice versa; there is definitely a venn diagram here. Dictators - all power, no responsibility. Manager of a homeless shelter - lots of responsibility, little power.

I believe, with nothing to back this up, that social media would be improved with a better educated population. One that knows and values the basic tenets of critical thinking and debate. No amount of policing will make people smarter or more courteous. People have to choose to be more civil and be more interested in views counter to their own.


The poster isn't defending Zuckerberg, merely trying to explain why Zuckerberg did what he did.

>If you have power, you have responsibility: full fucking stop.

No, if you have power you get to decide the rules.

Same as drunk driving: the reason we are not allowing driving while drunk is because the people who are against drunk driving have more power than the people who are pro drunk driving. The side that has more power gets to decide what the law is.


You're confusing legal responsibility with responsibility proper.

Dictators are responsible for the death of millions, even if they make laws that say otherwise.


Responsibility proper is subjective.

>Dictators are responsible for the death of millions.

The dictators may very well believe that their actions are the proper exercise of responsibility (according to them).

My point is you can't simply ask them to stop by telling them they have responsibility. Both sides may view responsibility differently.


Excuse my ignorance, but isn't the overlay network setup problem one that has problems at almost every level of the stack? If there are no definitive technical problems to overcome, why is it not possible to create a mobile app that friends and family could use as their own private network?

Isn't the internet supposed to be every node acting as its own server and client simultaneously anyways? Is the problem just the inability to truly decentralize discovery, registry, and identity authentication of nodes in the network? Or is the problem that most ISPs don't want people operating services out of their homes or off of their phones?


"Excuse my ignorance, but isn't the overlay network setup problem one that has problems at almost every level of the stack?"

It works for me and has worked for others. The keyword here that distinguishes this idea from almost every other "peer-to-peer overlay network" project that you can read about is "small". If you limit the size of the network, you can avoid some problems. Most projects you read about aim at the ability to create a single, large network that potentially everyone can join. Open to the public. However, using a different approach it is possible to create only small networks that are only open to people you know, e.g., friends, family, co-workers.

There are still problems, but there are always going to be some problems. The internet you are using right now has problems. The question is: does it work well enough? The small overlay network idea has worked well enough for me that I consider it one of the better ones.

It is really impossible to debate these ideas on the internet. Opinions are strong and negativity is even stronger. If you want an answer you need to try things out yourself and draw your own conclusions. No peer-to-peer solution is "perfect" and if you are always looking for the solution with zero negatives and zero limitations, you will never find it. Worse is if you never actually try these solutions, you just read about them. After you try many of them and learn what you like and do not like about the design/implementation, it is easier to choose one idea that works for your use case. Every time someone starts promoting a peer-to-peer project you can quickly evaluate it, based on what types of designs/implementations have worked for you and which ones didn't. Well, that's my opinion, anyway.


It may not be that hard to set things up for yourself; I have been toying with something like that for messaging: https://cweb.gitlab.io/StoneAge.html. The deeper question is how to sustain this kind of product and make it competitive without comparable funding.


Email exists. Wordpress has free site hosting. SMS is ad free. If people wanted to create a free webpage for friends and family to see there are loads of options. The problem is that, generally speaking, most people do not find the value proposition as a Facebook replacement compelling.


Where are you getting this idea of "Facebook replacement"?

You introduce your own idea, then shoot it down.

https://en.wikipedia.org/wiki/Straw_man

Facebook does not provide software to set up small overlay networks.

Nor is the business of Facebook to provide messaging or free web pages. That is just bait. The business of Facebook is providing a way to advertise to targeted audiences within the billions of people who visit Facebook. That is what people pay for. It is a website with billions of daily visitors.

Unless you have a website with billions of visitors then you are not competing with Facebook.

A small overlay network is not a website, it is a computer network. It does not have a large audience because the number of nodes is small. It is an interface you see when you type ifconfig or ip addr, not an email server, a blog website or an SMS provider. You could use it for those things or you could use it for other things, anything you can do on a LAN.

It makes zero difference what you think people want. This is not a popularity contest of any sort. If you want to argue against the premise of my comment then you need to argue that small, personally-managed overlay networks and peer to peer connections are less efficient for sharing content with friends and family than using a public website, subject to "curation" and "censorship".


> In 2020, it is still too difficult for everyone to set up their own website, so they settle for a page on someone else's [for] exchanging content with friends and family

This is patently not true. People were using email for this prior to Facebook's existence, which worked well enough ("Share photos via email! Share videos via email! Here's a funny story from grandma! RSVP to my birthday!") and was painless to set up; in fact, all the people who have Facebooks also have emails! But very few people are doing this anymore as their main mode of communication with friends and family; this kind of activity is now happening on Facebook, where people are happy with the ease that it facilitates. Small networks work fine, but nobody's interested, hence why Facebook is worth billions and billions of dollars and is a large website.


You say "nobody's interested". That of course is "patently false". For one, pwdisswordfish2 is apparently interested. The author and other users of the software he/she says she uses are obviously interested. There are numerous projects that aim to use overlays. The overlay idea was used by one company that sold multiple times, ultimately to Microsoft for billions of dollars. The HN readers who upvoted the comment describing small overlays are presumably interested in seeing the idea presented in a comment.

The stated purpose of this forum is intellectual curiosity. There is nothing in bobthepanda's comments that is directed at that stated purpose. Why is he/she arguing about email usage when the quoted sentence refers to setting up websites? This looks like more of the "straw man" argument technique. https://en.wikipedia.org/wiki/Straw_man


That's because the real value of Facebook is in the social graph, and it's locked in there.


Some families already have their own Slack. They chat, post articles, share pics, and even put content in appropriate channels that you're free to join or leave! I already do this with my friend group of about 20 cuz it's more than good enough.


> The primary content no user wants to see and every user agent would filter out is ads

"no user"? Nope. People buy magazines, that are 90% ads. Subscribe to newsletters. Hunt for coupons. Watch home shopping channels. Etc, etc.

There's large part of population that wants to see ads. Scammy and bad ads? No. Good and relevant ads? A LOT of people do want them. Even tech-folks, who claim that ads are worst thing for humanity. Don't you want to learn about sale for new tech gadgets? Discounts for AWS? Good rent deals?


> Don't you want to learn about sales on new tech gadgets? Discounts for AWS? Good rent deals?

No, not at all. Ads don't just inform about deals, they also incite you to buy. I don't want to buy just because some random company got the chance to put some psychological manipulation in front of my eyeballs. If I want or need something, I will specifically seek it out. If I don't specifically seek it out, then I probably don't really need it.

If the price (heh) of that is that I miss out on some better deals for stuff that I would otherwise seek out, I'm fully happy eating that cost.


> If I want or need something, I will specifically seek it out

Sometimes we don't know that we have a problem that is easily solvable. As an example, I had no idea that there was an industry to reduce aws spending. One day a marketer made it past my filters and made me aware of an industry that I benefit from today. Just one example of many where marketing is win-win.

On the other hand I agree that there is too much psychological manipulation, unfortunately it exists because it works. Maybe this is where we need to disrupt the industry through a different approach. Just brainstorming, what if we regulated advertising to prohibit emotional/manipulative messages and rely instead on advertising facts?


> Ads don't just inform about deals, they also incite you to buy. I don't want to buy just because some random company got the chance to put some psychological manipulation in front of my eyeballs.

I'm curious why you don't want to encounter these "manipulations". Is it because of the brainpower it uses to identify them as such? Are you particularly susceptible to giving in to them? Something else? I'm genuinely curious.


Not sure about him, but from my point of view ads mostly leave a slimy, oily, nauseating feel in my brain and make me want to gouge my eyes out.


I purposefully read a site called OzBargain, which is basically a compilation of ads. Sometimes I buy stuff I don’t need. Nonetheless, I like OzBargain.


>I don't want to buy just because some random company got the chance to put some psychological manipulation in front of my eyeballs

Lots of people say this online, I just don't believe them. Visual psychological manipulation works well enough on most people.


That makes no sense to me, I bought a lot of cool stuff just because I saw an ad on Instagram :|


> Don't you want to learn about sales on new tech gadgets? Discounts for AWS? Good rent deals?

No. I want less intrusions on my senses, not more. I buy few things, and ads are irrelevant for the central ones: groceries and utilities are dependent on my place of residence.

If given the choice, I’d welcome with open arms the chance to never see an ad again anywhere, including offline. I’ve never seen an ad with a deal as good as that one, and I doubt I ever will.


There's a fundamental mismatch between how consumers value ads shown to them and how advertising platforms value the ads.

Suppose there is a $100 product that produces $110 worth of value for a specific person and costs $50 to produce and deliver to them. To use economics terms, there's a $10 consumer surplus here, and a $50 producer surplus. The consumer willingness to see the ad is proportional to the consumer surplus, while the producer's willingness to pay for ad placement is proportional to producer surplus.

This is a fundamental divergence in interests; because the ad networks are paid by producers, they'll serve the ads that tend to make ad buyers the most money, not the ads that best enrich the viewers' lives.
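A toy computation makes the divergence concrete (the gadget numbers are the example above; the second ad is invented):

    # The ad network, paid by producers, ranks by roughly what producers
    # will bid (producer surplus), not by what viewers gain (consumer
    # surplus). Gadget numbers from above; the book ad is hypothetical.
    ads = [
        # (name, price, value_to_viewer, cost_to_produce)
        ("gadget", 100, 110, 50),
        ("book",    20,  60, 15),
    ]

    for name, price, value, cost in ads:
        consumer_surplus = value - price  # viewer's gain from buying
        producer_surplus = price - cost   # seller's gain per sale
        print(f"{name}: consumer ${consumer_surplus}, "
              f"producer ${producer_surplus}")

    # By producer surplus (what the network optimizes): gadget ($50) wins.
    # By consumer surplus (what the viewer would want): book ($40) wins.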


Honestly, a truly well-made ad targeted at me, something that really ticked all the right boxes... gosh I'd spend 3-4× more, easily. Because ordinarily I just don't lose time on shopping websites, but I'm kind of a spender when I do find things that I like. And I'm a sucker for good deals, I'm a patient prey hunter, I can wait months in ambush for the right deal.

Anecdotally of course, as one of those tech-folks who's got nothing against good and relevant ads.


> Anecdotally of course, as one of those tech-folks who's got nothing against good and relevant ads.

Be careful what you wish for. This TED talk[0] on persuasion architectures brings up some interesting moral arguments against this idea. If we generally accept that ad conversion is a metric of an ad's success, how do we handle the situation where people that are susceptible to addictive behaviors like gambling and compulsive shopping are exposed to ads that exploit their condition? Data-driven advertising systems are built particularly well to find and exploit these kinds of people. We already see this a lot in the scam call/email world. Once you fall for one, the amount of inbound traffic you receive from scammers increases dramatically.

0: https://www.youtube.com/watch?v=iFTWM7HV2UI&t=872s


> how do we handle the situation where people that are susceptible to addictive behaviors like gambling and compulsive shopping are exposed to ads that exploit their condition?

Ads like that used to be common. By your description, it sounds like they died out on their own.


How? Data-driven ad systems are built specifically to target people who have a high probability of conversion. The perfect person for such a system is one that would constantly (i.e. impulsively) buy or convert.


I guess I just don't buy enough things to want to see any ads at all. I don't have any particular issue selecting products either.

A well targeted ad doesn't need to serve you well, it just needs to convince you to buy; I have a lot of faith in myself and I'd say those are often the same thing, but experiences may differ.


The “ads just aren’t good enough yet” poster, while right, doesn’t really take into account the terrifying things companies would ask to know about you in order to properly target those ads.


Marketing is typically about getting to the customer before the customer does any homework. The more informed the customer is, the fewer conversions the marketing-heavy company will get. This is because they spend their budget on marketing (thus you seeing their ad as opposed to the competition’s). That means they don’t have the same resources dedicated to their product.

Sure at some scale, marketing becomes a requirement to build your brand for many other purposes and I will also concede that in some cases, like with services that hinge on network effect, marketing is required for the product to even become viable. But most things are not social networks and we would be better served to learn of the product in context with its competition in a less biased setting.


> Don't you want to learn about sales on new tech gadgets? Discounts for AWS? Good rent deals?

Sure. The thing is: when people are interested in something, they look for it. When I want to buy some games, I open my PS4's store. Any advertising I get there is absolutely fine because I asked for it. I told the software to show me the stuff that's available for me to buy.

The problem is when I open a link to some web page and 80% of my phone's screen turns into advertising noise I didn't ask for and couldn't care less about. I make it a point to delete this noise.


Honestly I think the only way to make an ethical social network is to make a non-profit one. Fund it alongside other public goods like PBS, public education, highways, rail networks, healthcare, etc.

And yes, I know: good luck getting THAT to happen in the US given how badly funded everything else in my list is. If you’re in another country that actually funds public goods maybe this is a thing you could talk to some of your fellow techies about and make a proposal, especially if your country is getting increasingly tired of Facebook?

Alternatively, ground-up local funding of federated social networks might be workable; I run a Mastodon server for myself and a small group of my friends and acquaintances, with the costs pretty much evenly split between myself and the users who have money to spare. It is not without its flaws and problems but it is a thing I can generally keep going with a small investment of my spare time every few months.


The solution is to nationalize FaceBook (and any site that proves to be similar) and allow everyone free access to it w/o censorship. Give Zuckerberg $1 for his time and effort.

Meanwhile, Google must be split up: e.g., Alphabet can go do AI work as a private corporation but Google Search needs to be nationalized (or split into at least 3 entities - I prefer nationalization).

Amazon must be broken up and parts nationalized, but this is a more complex case for later discussion.


A) These companies are American, and outside of certain, very specific, industries explicitly mentioned in the War Powers Act, the American government has no authority to nationalize them at any time for any reason.

B) Companies like Facebook in particular can never be nationalized due to the First & Fourth Amendments, and government being barred from competing with private entities in the private sector.

C) I do agree that Google and Amazon should be tried for anti-trust violations and that Google in particular should be split for violating anti-trust laws against vertical integration across markets.


Ok, but which country would Google Search have to be nationalized into?

The Brits won't be happy if Google is nationalized into a French public service.


> The primary content no user wants to see and every user agent would filter out is ads.

Not that you're wrong, but: that's the fucking point!

Advertising delenda est.


Pay for Facebook then. 1.5% of total YouTube users subscribe to YT Premium. I love how the smartest minds will ignore the most primitive economics. Ads work. For everyone. Except the deluded.


You know the worst part? I do pay for YT Premium and yet Google still finds a way to throw ads at me on Youtube videos through videos it suggests via Google Feeds (the leftmost screen on a Pixel). I bloody pay for the service and yet I am still getting ads on any youtube video I play when clicking any youtube suggested video on that feed. How annoying do you think that is? When you give in and pay, yet you are still getting harassed.


I had this issue as well, and logging out (in my case, switching accounts), wiping the Google app cache, and I think rebooting to make Pixel Launcher refresh, then logging back into the YT Premium subscribed account fixed it for me. Convoluted I know, but I think the issue was it wasn't picking up some profile variable or token denoting me as a subscriber. Hope that helps!


Hey, thanks for this, just did it. Now I'll just have to wait until Google suggests a new YouTube video to me to see if it worked.


I knew that the Google feed was advertising. Any reading about Google News and the Google feed that talks about this? I have a Pixel, same findings.


To be honest, I'd happily pay for YT Premium if Google didn't use my data to personalise other results and content on the internet. I personally stop using products/services that dictate what content is deemed "suitable" for my consumption. I'll happily be served adverts so long as I'm not getting manipulated.


If everybody paid for Facebook, it would have as many ads, if not more. That companies would leave money on the table with no incentive to do so is a bizarre self-justifying myth that people who live off advertising tell themselves.

You pay for cable. Paying customers are a better audience for ads than deadbeats.


But everybody doesn't pay for Facebook, and the reason they don't do so is because Facebook is funded by ads and no one has paid for a Facebook without ads. But sure, Facebook might hypothetically still have ads if users paid, and my grandmother might hypothetically be a bicycle if she had wheels.


Facebook's purpose of existence isn't for somebody to "pay for it".

Facebook's purpose of existence is to make money.


And the way they make money is by someone paying for it. Some sites collect payments from users directly. Facebook collects payments from advertisers because most users wouldn't use Facebook if they had to pay for it.

Do you have a point? This reads like an unfinished thought.


There is absolutely no reason that they can't do both at the same time; there is a mental short circuit going on when people think paying for a website means no ads. It's never meant that in pretty much any other media.


Yes there is a reason: users often don't expect to pay for services that show ads, and users who pay for services often don't expect to see ads. That's why most popular online services don't mix the two, e.g. Facebook, Spotify, YouTube, Netflix, Crunchyroll, Google, etc.

"mental short circuit" better describes your argument that jumps from "it's possible to both show ads and charge user fees" to "they always will".


Where did you get that number? That's actually far, far higher than I would have thought and quite encouraging...


On Feb 4th, Google said there were 20 million YouTube Premium users, and I believe the latest estimates put YouTube at 2 billion users, which would be a 1% subscription rate.


It would be interesting to see how they count a "user" too. If a good portion of those 2 billion people don't use YouTube very much then the % of users who are using it regularly and subscribing might be a lot higher.


The irony is that there's even more of a disincentive to drop ads for 'paid' users -- because people willing to pay for things are even more valuable to advertisers -- so if you make a paid option with no ads, you're also gutting the value of the freemium-ads option (beyond the average user loss)


Paid-for Facebook would be a viable business if it wasn't competing with free Facebook. It's not ignoring economics to think that Facebook is causing significant negative externalities that ought to be priced or regulated to allow more ethical alternatives to thrive.


free facebook should be regulated out of existence. what else is free that is good for you? in big cities you have to pay for clean air to breathe already.


That does not tell us much. Where can we look at YouTube's balance sheet? There is likely more to YouTube as a business than selling ads on YouTube. For one, YouTube under Google is like A.C. Nielsen on steroids. The combination easily rivals any "smart" TV.


I paid for youtube for a while, but I did not get a different algorithm. It was the same feed of addictive, stressful content. I stopped paying once I noticed this.


I already don't get ads on YouTube, so I don't see why I should pay for this when I can get all of the benefit with none of the expense.


> Pay for Facebook

You can do that?


Compare how much money Google can make off ads in a month to $15. You’re paying for way, way, way more than just removing ads and it’s obvious.


That's why you get other benefits that are much harder to price in.


...and most don’t use or want.

Of course they conveniently only bundle these features so they can pretend it’s what customers want—hell downloading and playing when the screen is off should be a part of youtube. Just another step in the long saga of companies intentionally crippling their own services to bilk customers.


I don't use Facebook, and I want it to die. Facebook is one of the many, many reasons why advertising must be destroyed.


How? AFAIK you can’t pay for Facebook. There’s no “Facebook Premium”. There’s only Ad Spigot Facebook.


Let's imagine for a moment that a decentralized social network actually took off.

How long until those ads crop back up anyway? Instagram should give us some idea on how sponsored content might look in such a system. According to some random site, the average price for a "sponsored" Instagram post is $300. You think your friends are above showing you an ad when real money is on the line? Maybe they won't be making that kind of money with very few followers, but when Pizza Hut asks you to post an ad in exchange for a free pizza, I think you'll see plenty of takers. Now, granted, at least the people being paid are your friends, instead of Zuck.


But the kind of people who get $300 for a post should have a very large group of followers. Which should imply that there should be a reasonably large group of people prepared to part with some money in support of their work.

And for the small group of people it seems to me that it should be easily self correcting by normal social cues since there’s no network effect to offset it.


This played out in LiveJournal, and the outcome, generally speaking, was that paid promotions were too blatant to be effective outside of influencers proper (where it's part of the explicit contract between them and their audience).


> Since ads are the primary way sites stay in business

Flaw? It seems that the point would be to force FB to transact with currency rather than a bait-and-switch tactic. The site would also be more usable if they were forced to change business model.


That is how it is today. But does it have to be like that? What is the minimum revenue per user required for a service like FB to run?

Everyone is sceptical about whether such a service can reach the critical mass to make financial sense, and a brand new FB replacement may not be able to do it. However, FB itself can certainly offer that as an option without hurting their revenues substantially.

I was sceptical about the value prop for YouTube Premium, but I am constantly surprised by how many people pay for it. If Google can afford to lose ad money with YT Premium, I am sure FB can build a financial model around a freemium offering if they wanted to.


Minimum doesn't matter, the only question is if it's more profitable than the current approach. Facebook makes $9/user/quarter. That's every user no matter how little they use the site.

The issue however is that the users advertisers care about are the ones with disposable income. The users most likely to opt out of ads are the ones with disposable income. Thus the marginal cost to Facebook from such users is significantly more than $9/quarter.
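To put rough, hypothetical numbers on that adverse-selection point (the $9/quarter average is from above; the revenue skew is assumed purely for illustration):

    # If ad revenue is concentrated in users with disposable income, a
    # flat ad-free tier priced at the average ARPU underprices exactly
    # the users Facebook least wants to lose. The 20%/60% split is invented.
    avg_arpu = 9.0            # $/user/quarter, average (from the comment)
    users = 100
    high_value_share = 0.20   # assumed fraction of high-income users
    revenue_share = 0.60      # assumed share of ad revenue they generate

    total = avg_arpu * users
    high_value_arpu = total * revenue_share / (users * high_value_share)
    print(f"average user:    ${avg_arpu:.0f}/quarter")
    print(f"high-value user: ${high_value_arpu:.0f}/quarter")  # $27

    # An ad-free tier at $9/quarter would lose ~$18/quarter of ad value
    # on each high-value user who takes it.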


>>> The ultimate problem is that Facebook doesn't want to show you good, enriching content from your friends and family. They want to show you ads. The good content is just a necessary evil to make you tolerate looking at ads.

>> That is how it is today. But does it have to be like that? What is the minimum revenue per user required for a service like FB to run?

> Minimum doesn't matter, the only question is if it's more profitable than the current approach.

Only if you think strictly inside the box.

The real problem here is a misalignment of incentives: Zuckerberg is managing Facebook to maximize the metric he's being evaluated on (profit and wealth), not the value provided to society.


Value to society is subjective and many horrors of the past (and current) were caused by trying to optimize specific definitions of that term.


And many horrors of the past and present were and are caused by optimizing for profits. So it's not like we get to side-step horror-avoidance.

I'd start with treating users as partners, and not cattle.


>So it's not like we get to side-step horror-avoidance.

Which is my point. It's easy to say "just force them to optimize for value to society" and then ignore what that really entails in practice. And what giving someone the power to do that tends to cause.

Actually solving the related problems and making things actually better is probably possible but it's messy and hard and complicated.


"force them to optimize" doesn't mean putting a gun to somebody's head. But it's clear that the existing socioeconomic arrangements overall incentivize companies, including Facebook, to do a lot of harm while chasing profits. Forcing them to not do so can also mean changing the arrangements to remove the incentives.


> It's easy to say "just force them to optimize for value to society" and then ignore what that really entails in practice. And what giving someone the power to do that tends to cause.

> Actually solving the related problems and making things actually better is probably possible but it's messy and hard and complicated.

What you describe is politics, and it's inescapable.


> I'd start with treating users as partners, and not cattle.

What does this mean in practice? How do you treat 2.6 billion people as partners?


> I am sure FB can build a financial model around a freemium offering if they wanted to.

They probably could. As they could also charge you a premium and then profit two times on top of you — with your fee and then by selling your data to third parties. Why? Because who would know that was happening? Corporations have no moral compass dictating their actions. The bottom line being what's best for investors.


Google uses my watching data in YouTube Premium whether I'm paying or not. They only claim not to show ads, and that's what most users care about. Even if FB sells/uses the data, as long as you don’t see ads and promoted content, there are enough people who will pay for it.

20 million for YT @ $5/month is more than $1 billion in revenue per year. Given that 20M is just 1% of the user base, there is probably not a lot of impact on the ad revenue either.


I don’t see Facebook surviving such a transition. Without the manipulation and data mining for engagement, you’re just left with a few features that are probably easily subsumed by other services in some federated fashion. It would probably look as exciting as hosted e-mail from the business side.


Only a minority of users would pay for it, <1% of users going by YT numbers.

It is not either free or paid; it can be both. It satisfies a need for the people willing to pay for the privilege of not seeing ads, and it even brings in additional revenue.

No radical business changes are required, only an ideological change from FB to treat their users as humans.


I think another tangential but related issue is with how these companies measure success. They measure success by engagement, and things that drive the most user engagement aren't usually the best for the user.

YouTube has been getting a lot of flak for this recently.


I am happy to pay 10 bucks a month for a Facebook-like service that suggests engaging, high-quality content to me without any promotional content.

I am already getting such a service in the form of Android's newsfeed feature on Pixel. It's Google, but it's pretty good.


You mean the screen to the left of the home screen? I have noticed a lot of low quality information and some advertising masquerading as articles.


> The good content is just a necessary evil to make you tolerate looking at ads.

You could make the same argument for Google or other online web sites relying on ads as the primary source of revenue.


You can, and if you look at Google's actions long term, they do.


>Since ads are the primary way sites stay in business, they are obligated to fight against user agents or other intermediary systems.

Not all users hate ads in principle, just in practice. In theory, you'd be making the users select ads for relevance and not being annoying. But obviously, the site wants to show ads based on how much they're paying, and "not being annoying" only factors in if it pushes people off the site entirely.


How are the user agents funded? Probably through ads.

The problem is actually how to fund the timeline publication services. But systems like Medium etc seem to work OK.

I am now spending several hundred dollars a year on content subscriptions. Plus subscriptions for Gmail, Zoom and a few other things where I have outgrown the free service. A freemium model for the timeline publication services would probably work.


Facebook would be perfectly happy to eliminate all ads if instead you paid a small or even tiny monthly fee. But you won't pay it.


No, they wouldn't, unless advertising gets banned. They'd instead accept your payment and find a way to shove ads in anyway, in a covert or overt way, just as many paid services do, because why leave money on the table?


YouTube and Spotify went that subscription route and it works well. There are no ads.


Spotify is reliant on copyright exclusions to keep competing free services at bay. So they aren’t really providing any value.

YouTube is still reliant on ads, and ad-funded content creators, and will continue serving you manipulative content whether you pay or not. So I can’t really count that as a success either.

The thing is, when you remove the ugly side of those businesses, what remains can be done better for free using p2p networks or federation over open protocols.


> The thing is, when you remove the ugly side of those businesses, what remains can be done better for free using p2p networks or federation over open protocols.

Then why hasn't it happened?


Because the ugly side of those businesses remains.


If you don't count "influencers" as ads.


I don't know about that. I buy a lot of things. I wish something would help me buy what I need and didn't know I needed so I didn't have to spend time shopping and researching.


According to this research, it isn't even good content. It's divisive content intended to force you to pick a "side" and then fight for it.


Not true, I want to see good ads, ads that relate to what I actually want to buy.


Maybe you're interested in hearing about X tech, or you can tell your "Agent" that you want to buy Y thing, or travel to Z. That's where ads and reviews get thru.


Still waiting for that ublock origin web browser.


This feels like saying "the fundamental flaw about email spam blocking is won't somebody think of the spammers?"


I think transparency matters more. I liked Andrew Yang's suggestion to require the recommendation algorithms of the largest social networks to be open sourced, given how much they can shape public discourse. Advertising in all mass media is already regulated to prevent outright lies from being spread by major institutions (although an individual certainly may spread them).


Open sourcing the algorithms (however we define it) does absolutely nothing. What use is a neural network architecture? Or a trained NN with some weights? Or an explanation that says - we measure similar posts by this metric and after you click on something we start serving you similar posts? None of those things are secret. More transparency wouldn't change anything because even if completely different algorithms were used, the fundamental problems with the platform would be exactly the same.


It's silly to so confidently assert that opening up a closed-source algorithm to third-party analysis will "do absolutely nothing". How could you possibly know there is nothing unusual in the code without having audited it yourself?

Seeing how the sausage gets made certainly can make lots of people lose their taste for it.


A lot of how big systems work is embodied in large neural networks, and the detailed structure of how they make decisions is an open research problem. So it’s not silly for OP to state that at all; it’s empirical fact.

It’s also not possible to audit the code for anything unusual without taking it all the way back through all tool sources, all hardware, down through the chips, through the doping in the chips, and even lower. This stack is such a hard problem that DARPA has run programs for a long time to address it. Start by reading Thompson’s ACM article “Reflections on Trusting Trust”, where he shows that code audits don’t catch hidden program behavior, then follow the past few decades in which these holes have been pushed through the entire computing stack.


Toolchain compromises are a non-zero risk, but they require a lot of orchestrated resources to subvert systems to meaningful effect (simple exfiltration is sufficient for most corporate espionage, versus something like Stuxnet built to enact specific changes covertly). A company doing that under legislation requiring recommendation behavior and policies to be transparent to the public would be violating the spirit of the regulation by creating more opacity and delusion, no question. Admittedly, they're not going to be prosecuted in our current cyberpunk-esque regulatory hellscape, but then neither would any public-benefit regulation pass anyway, making the discussion of subversion moot, right? So presuming a societal environment where such regulation _could_ pass, we would hopefully also have a more effective regulatory framework, one in which subverting the intent to be transparent (for the sake of public safety and trust, while still protecting trade secrets) would come under sufficient scrutiny. All I know is that engineer-activists like Jaron Lanier are working with more tech-aware activists/politicians like Yang to propose more effective tech regulatory frameworks than we've had in the past, and their efforts should accomplish a lot more than our current collective response of throwing up our hands or yelling, whining, and screaming ourselves hoarse.

From a regulatory standpoint mirroring the nature of our organizational tendencies, I posit that the _policy_ models should look similar to Mickens' security threat vector model - Not-Mossad or Mossad.


>It’s also not possible to audit the code for anything unusual without taking it all the way back through all tools source, all hardware, down through the chips, through the doping in the chips, and even lower.

Are you implying that since we can't audit every single thing, auditing anything is useless?


>Are you implying that since we can't audit every single thing, auditing anything is useless?

No, I am pointing out that your statement implies anything unusual in the code can be found by an audit. It cannot. And most of the activity by big companies is not in code, it's in data, and "auditing" that is currently beyond anything on the near horizon. Some of the behavior is un-auditable in the Halting Problem sense - i.e., the things you'd want to know are non-computable.


>No, I am pointing out that your statement implies anything unusual in the code can be found by an audit. It cannot.

This is so plainly false it's silly. Have you ever heard of a code review? What do you think security researchers do? Google Project Zero. Plenty of things are found all the time at the higher (and lower) levels of the stack, even if something unknown remains deep within.

>And most of the activity by big companies is not in code, it's in data, and "auditing" that is currently beyond anything on the near horizon.

Audits have no problem finding out what type of data is being collected (see PCI || HIPAA compliance). That would be a great start: for people to be made explicitly aware of all the data points that are being collected on them.


>This is so plainly false it's silly.

You're simply wrong. Did you read the article I told you about, the one that shows quite clearly exactly how to do this? No, you didn't, or you'd stop making this false claim. Before you repeat it, RTFA, which I'll post again since you didn't learn last time [1].

There, read it? There's decades of research into even deeper, more sophisticated ways to hide behavior. At the lowest level, against a malicious actor, there is no current way to ensure lack of bad behavior.

>What do you think security researchers do?

Yes, I've worked on security research projects for decades, winning millions of dollars for govt projects to do so. I am quite aware of the state of the art. You don't seem to be aware of basic things decades old.

Do you actually work in security research?

>Plenty of things are found all the time

You're confusing finding accidental bugs with an actor trying to hide behavior. The latter you will not find if the actor is as big as a FAANG or nation state.

If simply looking at things were sufficient, then the DoD wouldn't be afraid of Chinese-made software or chips - they could simply look, right? But they know this is a fool's errand. They spend literally billions working this problem, year in and year out, for decades. It's naive to think simple audits will root out bad behavior by malicious actors.

Even accidental bugs live in huge, open-source projects for decades, passing audit after audit, only to be exploited decades later. These are accidental. How many could an actor like the NSA implant with their resources that would survive your audits?

Oh, did I mention [1]? Read it again. Read followup papers. Do some original research in this vein, and write some papers. Give talks on security about these techniques to other researchers. I've done all that. I have a pretty good idea how this works.

>what type of data is being collected

Again, you miss. I am not talking about the data being collected. I'm talking about the data in big systems that make decisions. NNs and all sorts of other AI-ish systems run huge parts of all the big companies, and these cannot yet be understood - it is literally an open research problem. Check DoD SBIR lists for the many, many places they're paying researchers (me, for example - I write proposals for this money) to help solve this problem. For the tip of the iceberg, read up on adversarial image recognition and the arms race to detect or prevent it, and how deep that rabbit hole goes.

Now audit my image classifier and tell me which adversarial image systems it is weak against. Tell me if I embedded any adversarial behavior into it. Oh yeah, you cannot do either, because it's an unsolved (and possibly unsolvable) problem.

Now do this for every learning system, such as Amazon's recommender system, for Facebook's ad placement algorithms, for Google's search results. You literally cannot.

Don't bother replying until you understand the paper - it shows that a code audit will not turn up malicious behavior if the owner is actively trying to hide stuff from you.

[1] https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...


>Do you actually work in security research?

Yep!

>You're confusing finding accidental bugs with an actor trying to hide behavior. The latter you will not find if the actor is as big as a FAANG or nation state.

Wrong, wrong, wrong. Here's but one of many examples: https://googleprojectzero.blogspot.com/2019/08/a-very-deep-d...

>it shows that a code audit will not turn up malicious behavior if the owner is actively trying to hide stuff from you.

Yes, code obfuscation is a thing, but it's not a perfect silver bullet like you falsely claim, and often only serves to slow down researchers.

Of course it is true though that many bugs and vulnerabilities remain hidden which we may never find, and yet that's not a valid reason not to look for them, because there are many which are found every single day.

The perfect is the enemy of the good.


>> Do you actually work in security research?

> Yep!

Your comment history is interestingly lacking any evidence of that. Care to demonstrate it?

>>The latter you will not find if the actor is ...

> Wrong, wrong, wrong.

Tell me how you can audit an image classifier to ensure it won't claim a specially marked tank is a rabbit, where an enemy built the classifier and you don't know what markings the enemy will use. You're given the neural net code and all the weights, so you can run the system to your heart's content on your own hardware. Explain how to audit that, please.

Good luck. It's bizarre you claim people can find such things, when it's a huge current research problem, and it's open if such things can even be demonstrated at all.

Same thing for literally any medium or large neural network. None can be audited for proof of no bad behavior.

>code obfuscation is a thing, but it's not a perfect silver bullet like you falsely claim,

I've never said code obfuscation. Unless you understand the paper which you seem to repeatedly ignore, you'll keep making the same mistake.

The paper demonstrates how to remove the code that carries the behavior while still embedding the bad behavior in the product. You cannot find it by auditing the product's source code. There is zero trace in the source. The attack in the paper has since been pushed through all layers of the computing stack, and is now at the quantum level. And as these effects become more important, there can be no audit, since what you want to know runs up against physics, such as the No-Cloning Theorem.

That you don't realize this is possible is why you keep making the same error that looking at source code will tell you what a product does.

If things were as simple as you naively believe, there would not be literally billions of dollars available for you to do what you claim you can. DoD/DARPA/secure foundries would love to see your magic methods.

Have you read the paper? Maybe it will show you the tip of the iceberg on why audits are used to find accidents, but are much weaker against adversarial actors, to the point of not providing any value for really complex systems.

>The perfect is the enemy of the good.

I'm not saying don't audit. I'm disputing your initial claim that an audit will find anything unusual. It can find common errors. It can find bad behavior inserted by unskilled people. But against groups that know the current work, you won't find anything, in the same way you cannot audit a neural network.

That you continue to think code obfuscation is the only way to embed bad behavior in a stack shows that you're unaware of a large section of security research. Read the paper.


>I'm not saying don't audit. I'm disputing your initial claim that an audit will find anything unusual.

By "unusual" I mean anything that has intended or unintended negative effects on society, such as what was seen with Cambridge Analytica, or FBs emotional manipulation studies.

>It can find common errors. It can find bad behavior inserted by unskilled people. But against groups that know about current work, you won't find anything

Yep, and without a 3rd party audit, we can't even begin to approximate the degree of hypothetical bad behavior that exists affecting billions of people due to regular developers doing what they're told by their product managers (or of their own volition), let alone a nation state APT.


You keep ignoring both how to audit NNs and how to address behavior not in code. Have you read the paper yet? Explain how audits work in light of the paper.

Without answering those you’re simply wasting time and effort by claiming audits can find things they cannot.

I quite doubt you work in security from your inability to grasp these things. Please demonstrate you’re not lying. Your posting history shows a tendency to be a conspiracy believer, and there’s zero evidence you do anything professionally in security, unlike the history of those I know that do work in security.


The entire stack contains enough holes to be Swiss cheese; auditing the open code means nothing if and when something earlier in the stack manipulates the outcome of that code. This is one of the reasons the big security issues in Intel CPUs over the last few years were such a big deal. The entire stack needs to be reworked at this point.


>auditing the open code means nothing if and when something before that code in the stack manipulates the outcome of the code.

In terms of software security vulnerabilities, there is so much low-hanging fruit making exploitation trivial. Even if a small team within an intelligence agency knows about a zero-day deep in the stack, addressing vulnerabilities higher up the stack that are easily exploited by script kiddies necessarily reduces the attack surface.

However, what we're talking about here is not so much about security vulnerabilities, as it is about design flaws (or features) which have harmful effects on society.


There isn't a simple fix, or likely any "fix," for the issues you want to be knowable. Besides the economic impossibility of it, there are too many places to hide behavior that we cannot foresee due to quantum effects, complexity, etc. So reworking the entire stack is not reasonable or likely very beneficial.

It's better to incrementally address issues as they are found and weighted.


Not the recommendation engines. The graph. All the social media companies (and indeed Google and others) profit by putting up a wall and then allowing people to look at individual leaves of a tree behind the wall, 50% of which is grown with the help of people's own requests. You go to the window, submit your query, and receive a small number of leaves.

These companies do provide some value by building the infrastructure and so on. But the graph itself is kept proprietary, most likely because it is not copyrightable.


The graph itself borders closely on privacy issues as well. Even if FB et al. were government funded, that wouldn't make it good either. And said data could be considered a competitive advantage, but perhaps not: if everyone got a copy of the various social networks' friends lists, the number of viable alternatives would skyrocket because the lock-in effect would be gone. Perhaps this needs to be theorized more along the lines of modernized anti-trust law (which doesn't work well in tech, given that anti-trust laws were built around trying to lower consumer prices).


Yeah, pretty much. It's easy for Facebook to claim that it's popular and the best thing going when you specifically need an FB account for contacting people.


>advertising in all mass media is regulated to prevent outright lies from being spread

Advertising in mass media is regulated. You are very much allowed to publish claims that the government would characterize as outright lies; you just can't do it to sell a product.


Does that actually work? If they create some complex AI and then show us the trained model, it doesn't really give much insight into how the AI makes its recommendations. You could potentially test whether certain articles get recommended, but reverse engineering how the AI recommends them would be far more time-consuming than updating the AI. As such, Facebook would just need to update the AI faster than researchers can determine how it works in order to hide how their code works. Older versions of the AI would eventually be cracked open (as much as a large matrix of numbers representing a neural network can be), but between it being a trained model full of opaque numbers and Facebook having a newer version, I think they'll be able to hide behind "oops, there was a problem, but don't worry, our training has made the model much better now".


It would at least make clear whether the site tries at all to restrict certain recommendations, like harmful content, and whether the model is subject to top-down rules and policies, like recommending government propaganda sites over independent sources. It could be used to craft later, better-worded and better-targeted subpoenas about how said filtering and censoring works. It would also show whether there is a special promotion system for the company's own products, and so forth. In many respects it acts like an org chart, helping regulators and the public determine _what_ to scrutinize with more concrete actions. It provides a map, and that's better than a black box or a Skinner box where we are the subjects.


Setting aside concerns about the efficacy of the idea, it also seems like an arbitrary encroachment on business prerogatives. I think everyone agrees that social media companies need more regulation, but mandating technical business-process directives based on active-user totals isn't workable, not least because the definition of "active user" is highly subjective (especially if there is an incentive to get creative about the numbers), but also because something like "open source the recommendation algorithm" isn't a simple request that can be fulfilled on demand, especially with the inevitable enfilade of corporate lawyering to establish battle lines around the bounds of intellectual property that companies would still be allowed to control versus what they would be forced to abdicate to the public domain.


The risk is that it behaves like a reinforcement learning algorithm which essentially rewards itself by making you more predictable, I'd argue that's what curated social networks do today.

If you're unpredictable you're a problem. Thus, it makes sense to slowly push you to a pole so you conform to a group's preferences and are easier to predict.

A hole in my own argument is that today's networks are incentivized to increase engagement, where a neutral agent is in most ways not.

So perhaps the problem isn't just the need for agents but for a proper business model where the reward isn't eyeball time as it is today.


> If you're unpredictable you're a problem.

But you are predictable; even if you think you are unpredictable, you are just a bit more adventurous. An algorithm can capture that as well, and it will be easier for an algorithm that works on your behalf.


This makes me think of a talk I had a few years ago with an AI-optimistic Microsoft sales guy. His argument was essentially the same: "Look, it's no problem to have an AI curate everything for you, because the algorithm will just know what you want, even if your habits are unusual!"

Of course this hasn't happened yet, and I doubt it ever will. Maybe I'm just insane, but most of the recommendations from services I have fed data to for hundreds of hours (YouTube) are actually repulsive.


Interesting, because I have a rather random assortment of hobbies that generally have no overlap, and I get pretty good recommendations all the time.


It gets pretty bad when the hobbies have some political affinity to them, and it's the opposite of your actual politics.



> So perhaps the problem isn't just the need for agents but for a proper business model where the reward isn't eyeball time as it is today.

I've been on this for years. Free is a lie, and the idea that everything has to be "free as in beer" is a huge reason so many things suck.


What you’re referring to is splitting the presentation from the content. The server (eg Facebook) provides you with the content, and your computer/software displays it to your liking (ie without ads and spam and algorithmically recommended crap).

There’s a lot of history around that split, and the motivation for HTML/CSS was about separating presentation from the content in many ways. For another example, once upon a time a lot of chat services ran over XMPP, and you could chat with a Facebook friend from your Google Hangouts account. Of course, both Google and Facebook stopped supporting it pretty quickly to focus on the “experience” of their own chat software.

The thing is that there is very little money to be made selling content, and a lot to be made controlling the presentation. So everyone focuses on the latter, and that’s why we live in a software world of walled gardens that work very hard to not let you see your own data.

There is an EU legislation proposal that may make things a bit better (social network interop), but given the outsized capital and power of internet companies, I'm not holding my breath.


> you could chat with a Facebook friend from your Google Hangouts account

This was never true. There was an XMPP-speaking endpoint into Facebook's proprietary chat system, but it wasn't a S2S XMPP implementation and never federated with anything. It was useful for using FBChat in Adium or Pidgin, but not for talking to GChat XMPP users.


I don't know about Facebook but Google Talk was federated at some point [1].

[1] https://googletalk.blogspot.com/2006/01/xmpp-federation.html


Yep, Google's was. They never enabled server-to-server TLS, though, so GTalk was effectively cut off from the federated XMPP network after May 2014 when that became mandatory: https://blog.prosody.im/mandatory-encryption-on-xmpp-starts-...


Your friends provide you with the content, not Facebook. You only need Facebook now because you don’t have a 24/7 agent swapping content on your behalf and presenting it how you like it.


That’s a very good point. One line of thinking I’m interested in is social networking over email.

Everyone has email, so you could imagine a social networking app that’s just a thin layer over your email, and every interaction is encoded as an email being sent under the hood. Want to share a picture with your friends? Send an email. Someone wants to comment on it? They just send an email. Etc.

The main purpose of the app would be to offer a nice, device responsive, consistent presentation. Additionally if this were an open, documented standard, an entire ecosystem of “email apps” could emerge.

(Of course as far as your actual email account goes you’d want to auto archive the emails + not get notifications for them, but that’s easily configurable)
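As a rough sketch, an interaction could be nothing more than a structured message with a couple of custom headers the app filters on (every header and field name here is made up, not an existing standard):

  import { randomUUID } from "crypto";

  // Hypothetical encoding of a "share a photo" interaction as a plain
  // email message; a client app would route on the made-up X-Social-Type
  // header so these never clutter the regular inbox.
  interface SocialEmail {
    to: string[];
    subject: string;
    headers: Record<string, string>;
    body: string;
  }

  function sharePhoto(friends: string[], caption: string, photoUrl: string): SocialEmail {
    return {
      to: friends,
      subject: `[photo] ${caption}`,
      headers: {
        "X-Social-Type": "photo-share",   // hypothetical routing header
        "X-Social-Thread": randomUUID(),  // comments reply with the same thread ID
      },
      body: `${caption}\n${photoUrl}`,
    };
  }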


We could do all the same things on the web, so long as the standards are open. But that's exactly the problem - lock-in is how social networks make profits, so the largest ones (where most people already are) are also the least likely to support anything like this.


Separating presentation and content is one way to do it, but it's not the only way.

For example, Facebook could create some kind of plugin API that allows you to interpose your filtering/ranking code between their content and their presentation.

For example, maybe they give you a list of N possible main page feed items each with its own ID. Your code then returns an ordered list of M <= N IDs of the things that should go into your feed. That would allow you to filter out the ones you don't want and have the most interesting stuff displayed first. Facebook could display the M items you've chosen along with ads interspersed.
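A sketch of what that hook could look like, with every type and name hypothetical since no such API exists today:

  // Hypothetical shape of a candidate feed item handed to user code.
  interface FeedItem {
    id: string;
    kind: "friend_post" | "group_post" | "page_post" | "suggested";
    text: string;
  }

  // User-supplied hook: receives N candidates, returns an ordered list of
  // M <= N item IDs; the site renders those and intersperses its ads.
  function rankFeed(candidates: FeedItem[]): string[] {
    const weight = (item: FeedItem): number =>
      item.kind === "friend_post" ? 2 : item.kind === "group_post" ? 1 : 0;
    return candidates
      .filter((item) => item.kind !== "suggested")           // drop algorithmic filler
      .filter((item) => !/giveaway|sponsored/i.test(item.text))
      .sort((a, b) => weight(b) - weight(a))                 // friends first
      .map((item) => item.id);
  }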

Something like that could run in the browser or Facebook could even allow you to host your algorithm in a sandbox on their servers if that helps performance. (Which means you trust them to actually run it, but you have to trust them on some basic things if you're going to use their service at all.)

In other words, changing the acoustics of the echo chamber doesn't mean you need to be the one implementing a big chunk of the system. You just need a way to exert control over the part you want to customize.


RSS is yet another example of separating content from presentation.


I don't see this as a bad thing. I experience this as a good thing.

The RSS feeds I subscribe to give me plenty of "presentation" or "branding": logos, written descriptions [both short- and long-form], clear names of what I am subscribing to, URLs. Just the right amount for me, in fact. If I want to go to their website(s) for their particular buffet of blog posts, featured puff pieces on Page Five, Twitter mentions, &c, I can do that ... or not. I'm glad I don't have to if I don't want to. These folks are more than able to drop a "Please go here for our tour information and the new stuff in our online shop" mention into their RSS feed, just as you are able to skip RSS and go straight to some website full of deep-thumping media flashing into your senses on the way to what you actually wanted.


Agreed completely. RSS is an example of what content–presentation separation could be if we made it more prevalent across the web.
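You can already approximate it on a small scale with plain RSS: fetch the feeds yourself and apply whatever ranking you like. A browser-side sketch, assuming the feeds permit cross-origin fetches:

  interface Entry { title: string; link: string; published: Date; }

  // Pull one RSS feed and flatten its <item> elements into plain objects.
  async function fetchEntries(feedUrl: string): Promise<Entry[]> {
    const xml = await (await fetch(feedUrl)).text();
    const doc = new DOMParser().parseFromString(xml, "text/xml");
    return Array.from(doc.querySelectorAll("item")).map((item) => ({
      title: item.querySelector("title")?.textContent ?? "",
      link: item.querySelector("link")?.textContent ?? "",
      published: new Date(item.querySelector("pubDate")?.textContent ?? 0),
    }));
  }

  // The "curation" step is entirely yours; here, plain reverse-chronological.
  async function myFrontPage(feeds: string[]): Promise<Entry[]> {
    const all = (await Promise.all(feeds.map(fetchEntries))).flat();
    return all.sort((a, b) => b.published.getTime() - a.published.getTime());
  }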

There seems to be a steady thread of this sentiment here on HN, yet over the years no one has quite cracked this nut. Solutions welcome!


ActivityPub and other federated networks are the answer. They do exactly that: if you aren't satisfied with the rules on existing servers, you host your own. The network itself is wide open, and its control is distributed across many server admins. The way the content is presented is of course completely up to the software the user is running. Having no financial incentive to make UX a dumpster fire visible from space also helps a lot.


They're not the answer as long as they don't have loads of people. The attraction of FB and the like is that almost everyone has a FB account, just like almost every public figure has a twitter account. The downside of things like Mastodon is how do you know what server you want to connect to? For a non-technical user it doesn't offer any more obvious utility than a FB group.


There is indeed the problem of discovery that Mastodon doesn't feel like addressing. Like, you pick a server, make an account and now what? There's no way to bring your existing social graph with you. Even if your friends are there, you won't ever find them without asking each and every one about their username@domain. But I have some ideas on fixing that for my fediverse project — like making a DHT out of instance servers, thus making global search possible while keeping the whole thing decentralized.
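Concretely, the rough shape I have in mind (nothing is built yet, so treat this as a sketch): rendezvous-hash each handle onto a few instances, so any node can compute locally which servers to query.

  import { createHash } from "crypto";

  // Rendezvous-hashing sketch: for a handle like "user@domain", the k
  // instances with the highest hash(handle|instance) scores are the ones
  // responsible for indexing it. Any node can recompute this locally,
  // so global search needs no central directory.
  function indexersFor(handle: string, instances: string[], k = 3): string[] {
    const score = (instance: string): bigint =>
      BigInt("0x" + createHash("sha256").update(`${handle}|${instance}`).digest("hex"));
    return [...instances]
      .sort((a, b) => (score(b) > score(a) ? 1 : -1))
      .slice(0, k);
  }

  // A search for "alice@example.social" then queries only:
  //   indexersFor("alice@example.social", knownInstances)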


I agree this is the main problem with ActivityPub. Can you elaborate on your DHT idea? I'm thinking of doing something similar. How can the search be fast if the data lives on many nodes? Will you store a cache on a single instance and update it on some interval?


To be honest, I haven't really explored this yet, it's just that DHT feels like the most sensible approach. It's a rather ambitious project and I'm currently making the core functionality work (and interoperate with Mastodon where applicable). I'll probably do a Show HN post at some point.


> The attraction of FB and the like is that almost everyone has a FB account

This is exactly why FB should be forced to federate with other social networks, by law.


But in a way they already do, by going full Hoover over the social media floor and buying them outright.


Federate, not annex.


I like this, and so does my friend Confirmation Bias, who is pretty clear that the AI would select completely unbiased content relevant to me, not limited by any of the Bias family. It would be 100% better than the bias filters in place now, because my thoughts and selections are always unbiased, IMHO. (FYI: Obviously I'm not being serious. You clearly knew that; this notice is for the other person who didn't.)


> I don't know. Maybe that will just make the echo chambers worse.

This.

Also: what incentive does a walled garden even have to allow something like this? Put another way, what incentive does a walled garden have not to just block this "user agent"? The UA would effectively be replacing the walled garden's own algorithmically curated feed, and if the user builds their own AI bot, the walled garden can't make money the way it currently does.

I think the idea is very interesting. I personally believe digital UA's will have a place in the future. But in this scenario I couldn't see it working.


True, but we have ad blockers and they're effective. They're effective against the largest, richest companies in the world. There are various reasons for that, but at the end of the day it remains true that I can use YouTube without ads if I choose to. There's clearly a place in the world for pro-user curation, even if that's not in FAANG's best interests. I think it's antithetical to the Hacker ethos to not pursue an idea just because it's bad for mega-corps.


Mega-corps don't stop themselves from pursuing an idea just because it's bad for the hoi polloi, so why should we?


> What if, instead, you had a personal AI

I was in agreement with you until I read that. People don't need to have content dictated to them like mindless drones, whether it is from social media, bloggers, AI, or whatever. Many people prefer that, though, out of laziness. It's like the laugh track on sitcoms, added because people were too stupid or tuned out to catch the poorly written jokes even with pausing and other unnecessarily directed focus. It's all because you are still thinking in terms of content and broadcast. Anybody can create content. Offloading that to AI is just more of the same, but worse.

Instead imagine an online social application experience that is fully decentralized without a server in the middle, like a telephone conversation. Everybody is a content provider amongst their personal contacts. Provided complete decentralization and end-to-end encryption imagine how much more immersive your online experience can be without the most obvious concerns of security and privacy with the web as it is now. You could share access to the hardware, file system, copy/paste text/files, stream media, and of course original content.

> And isn't that how the internet used to be?

The web is not the internet. When you are so laser focused on web content I can see why they are indistinguishable.


I think your suggestion is a bit out of scope for what's actually being discussed/not really a solution.

I'm active on the somewhat (not fully) decentralized social medium Fediverse (more widely known as Mastodon, but it's more than that), and I think the lack of curation is a problem: posts by people who post a lot while I'm active are very likely to be seen, while those by infrequent posters active while I'm away are very likely to go unnoticed.

How would your proposed system (that seems a bit utopic and vague from that comment, to be honest) deal with that?


> People don’t need to have content dictated to them like mindless drones whether it is from social media, bloggers, AI, or whatever.

If the AI is entirely under the user's control, why not? It's like having a buddy that does for me what I'd do for myself, if I had the time and energy (and eyebleach).


I like this idea.

In response to it just creating more echo chambers:

- It can't be worse than now.

- At minimum, it's an echo chamber of your own creation instead of one manipulated by FB. There's value in that, ethically.

- Giving people choice at scale means it will at least improve the situation for some people.


Isn't Facebook (and Reddit, and Twitter) showing you posts by people, companies, etc. that you decided to follow? (Plus some ads?)

I am pretty sure things can be worse than right now; pretending we are in some kind of hell state at the bottom of a well where it can't possibly get worse seems unrealistic to me.


I've seen Twitter pull tweets from an account merely because someone I follow follows them. Facebook is the same.

I think Reddit sticks strictly to your subscriptions, unless you go to /r/all.


Neal Stephenson explores something like your “user agent” idea and comes up with a different solution in his novel “Fall; or, Dodge in Hell.”

Spoilers ahead:

In Stephenson’s world, people can hire “editors” to curate what they see, and those editors effectively determine reality for people at mass scale. This is just one of the many fascinating ideas Stephenson explores, and I highly recommend reading the book.

This interview covers some of the details if you’re not willing to dive into an 800+ page novel:

https://www.pcmag.com/news/neal-stephenson-explains-his-visi...


Highly recommend reading Reamde first if you can. The story is entirely different, but it's set in the same world and comes chronologically first; I felt the continuity added a lot when reading Fall.


Sounds pretty similar to the concept of “software agents” which was popular in the mid ‘90s: http://www.cs.ox.ac.uk/people/michael.wooldridge/pubs/iee-re...

Part of the concept was that the agents would actually roam onto servers on the internet on your behalf raising complicated questions around how to sandbox the agent code (came in useful for VPSs and AWS-style lambdas in the end).


At Baitblock (https://baitblock.app), we're working on something similar. It's called the Intelligent Blocker, and it has the same intended goal as your user agent 'AI' (not yet open to the general public, under development right now). With it you will be able to block, for example, all Facebook posts that are not from your family, not of a specific type, or from a specific person.

Or comments on different Internet forums that are blatantly spammy/SEO gaming etc.

Or block authors in search results or Twitter feed or any comment that you don't like. Basically the Zapier of content filtering.

This will be available to the user as a subscription service.

Some of these things are unfortunately not possible on mobile platforms (Android, iOS) because the OSes do not allow such access, but we hope that Android and iOS will open up in the future to allow external curation systems apart from the app platform itself, as it's in the interest of the user.


I think the overwhelming majority of users don’t want to deal with this kind of detail. IMO most people would end up using some kind of preset that matched their preferred bubble.


I haven't touched this in years, but at one time I made a little project[1] to analyze the people I was following on Twitter and recommend who I might want to unfollow based on their attitudes. People who posted negative stuff very frequently were at the top of my list to ditch; I don't need extra input pushing me toward misery. The first few runs were very illuminating, but not surprising, like "wow, now that you mention it, Joe does say awful stuff approximately hourly".

I would love to have an agent that could apply those sorts of analyses to my data sources. In my case, I wouldn't want to filter out bad news, but unnecessarily nasty spins on it. I'd find that super valuable.

[1] https://github.com/kstrauser/judgish
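The core of that kind of analysis can be tiny. A sketch (assuming some off-the-shelf sentiment scorer returning values in [-1, 1]; this is not the linked project's actual code):

  // Rank followed accounts by how negative their recent posts skew,
  // surfacing unfollow candidates below a chosen threshold.
  function unfollowCandidates(
    postsByAccount: Map<string, string[]>,
    sentiment: (text: string) => number,  // assumed external scorer
    threshold = -0.3,
  ): string[] {
    return [...postsByAccount.entries()]
      .map(([account, posts]) => ({
        account,
        avg: posts.reduce((sum, p) => sum + sentiment(p), 0) / Math.max(posts.length, 1),
      }))
      .filter(({ avg }) => avg < threshold)
      .sort((a, b) => a.avg - b.avg)  // most negative first
      .map(({ account }) => account);
  }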


We're a small team working in stealth on this exact challenge. Shoot me a note if you're interested in hearing more or getting involved. itshelikos@gmail.com


This type of thing is nothing new, but it's important to recognize that it doesn't take off because it's illegal.

As soon as Facebook realizes you're a risk, you'll get a C&D ordering you to stop accessing their servers. These typically have the force of law under the CFAA.

You won't access their servers, but just read the page that the user already downloaded? You'll still get nailed under the Copyright Act.

"User agents" in the sense used by the OP are as old as the internet itself. There's an active, serious, and quiet effort to abuse outdated legislation to ensure that they never become a problem.


good luck


I mean Facebook doesn't really decide what content I see, I do. I aggressively police my timeline and unfollow people who post garbage content. I don't really need an AI to do that for me...


do you cross-validate what the algorithm serves to you with full timelines of people you follow? how do you do that?


Another early assumption about the internet and computers in general was that users would exert large amounts of control over the software and systems they use. This assumption has thus far been invalidated, as people by far prefer to be mere consumers of software designed to make its designers money. Even OSS is largely driven by companies who need to run monetized infrastructure, though perhaps you don't pay for it directly.

Given that users are generally not interested in exerting a high level of sophisticated control over the software they use, how then is the concept of a user-agent AI/filter any different at a fundamental level? It probably won't be created and maintained as a public benefit in any meaningful way, and users will not be programming and tuning the AI as needed to deliver the needed accuracy. I don't think AI has yet reached a level of sophistication where as broad a range of content as what's found on the internet (or even just Facebook) can be curated to engage the human intellect beyond measuring addictive engagement, not without significant user intervention.

Hopefully I'm wrong, as I do wish I could engage with something like Facebook without having to deal with ads or with content curated to get my blood boiling. Sometimes I do wonder how much it is Facebook vs. human tendency under the guise of an online persona, as both are clearly involved here.


Are there research papers you could link that prove users don’t want to exert more control? I’d be interested in reading more.

It always appears to me that the companies are making it difficult and users have no voice.


There are models for this that could probably work. Tim Berners-Lee has been working on a scheme called Solid for years now.

It is important to realize that Facebook is not the first, second, or even tenth of its ilk. Facebook combines a bunch of ideas from previous systems, in particular MySpace and USENET. It is more or less the third generation of Web social media. There is no reason to believe there can't be a fourth.

My interest in these schemes is to provide a discussion space that is end-to-end encrypted so that the cloud service collecting the comments does not have access to the plaintext. This allows for 'Enterprise' type discussion of things such as RFPs and patent applications. I am not looking to provide a consumer service (at this stage).

The system you describe could be implemented in a reasonably straightforward fashion. Everyone posts to the timeline service of their choice and chooses among a collection of user agents that discover interesting content for them to read. These aggregation services could be paid or advertising supported. Timeline publishing services might need a different funding model, of course, but bit shoveling isn't very expensive these days. Perhaps it could be bundled with video conferencing capabilities, password management, or any of the services people already pay for.

As for when the Internet/Web was not so vast: one of my claims to fame is being the last person to finish surfing the Web, which I did in October 1992, shortly after meeting Tim Berners-Lee. It took me an entire four days of night shifts to surf every page of every site in the CERN index.


In the context of this discussion, Solid sounds amazing. I'd be super excited to tune the social web to my own preferences. Sadly, however, I couldn't make heads or tails of this garbage jargon-laden website. WTF?

https://inrupt.com

"Time to reset the balance of power on the web and reignite its true potential.

When Sir Tim Berners-Lee invented the web, it was intended for everyone. The excitement and creativity of its early days were driven from the notion that we can all participate — and the impact was world-changing.

But the web has shifted from its original promise — and it’s time to make a change.

We can still unlock the true promise of the web by decentralizing the power that’s currently centralized in the hands of a few. How? By using the power of Solid.

Solid is the technically potent, open-source platform built to decentralize the web. Inrupt is the company that’s helping to fuel Solid’s success."

What? How will that help me or my grandma?


Why would a personal AI which curates your content be any “better” than FB’s AI which curates your content? Isn’t the current AI based on what you end up engaging with anyway? If you naturally engage with a variety of content across the ideological spectrum, then that’s what the FB AI is going to predict for you. Unfortunately, the vast majority of us engage with content which reinforces our existing worldview - which is exactly what would happen with a personal AI.


Because an algorithm under your control can be tweaked by you. Could be as simple as reordering topics on a list of preferences. Facebook's algorithm can't be controlled like that. Also, an algorithm you own won't change itself unbeknownst to you.


Sure, but I bet 99.9% of the people will not tweak their personal algorithm and will end up with the same result as FB.


I tried building this 10 years ago as a startup. Maybe time to revisit, the zeitgeist is turning more and more towards this and computing power has gotten cheap enough ...


This misses the point. Facebook refuses to look inward or mess with its core moneymaker, regardless of how it affects people. No one is ever going to sip from the firehose, just like we'll never again get a simple view of friends' posts sorted by creation date.

I think the real problem is Facebook's need to be such a large company. They brought this on themselves trying to take over the world. Maybe they need a Bell-style breakup


Zuck doesn't care how healthy content is if it reduces ad revenue and/or user activity (MAU/DAU) metrics. Basically, he wants to extract enough time/money from each user while staying just bearable enough that the user doesn't leave the site in disgust. Once you realize this cardinal truth about FB, all the reprehensible actions from Zuck and senior leadership make perfect sense.


I like the line of thinking, but who actually provides the agent, and what are their incentives?

This is far from a perfect analogy, but compare it to the problem of email spam. People first tried to fight it with client-side Bayes keyword filters. It turns out it wasn't nearly as simple as that, and to solve a problem that complicated, you basically need people working on it full time to keep pace.
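(Those early client-side filters were roughly this, as a minimal naive-Bayes sketch:)

  // Per-word spam probabilities combined over a message's words,
  // with add-one smoothing; train on labeled mail, then score.
  class BayesFilter {
    private spam = new Map<string, number>();
    private ham = new Map<string, number>();
    private nSpam = 0;
    private nHam = 0;

    train(text: string, isSpam: boolean): void {
      const counts = isSpam ? this.spam : this.ham;
      if (isSpam) this.nSpam++; else this.nHam++;
      for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
        counts.set(word, (counts.get(word) ?? 0) + 1);
      }
    }

    spamProbability(text: string): number {
      let logOdds = Math.log((this.nSpam + 1) / (this.nHam + 1));
      for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
        const pSpam = ((this.spam.get(word) ?? 0) + 1) / (this.nSpam + 2);
        const pHam = ((this.ham.get(word) ?? 0) + 1) / (this.nHam + 2);
        logOdds += Math.log(pSpam / pHam);
      }
      return 1 / (1 + Math.exp(-logOdds));  // convert log-odds to probability
    }
  }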

Ranking and filtering a Facebook feed would have different challenges, of course. It's not all about adversaries (though there are some); it's also about modeling what you find interesting or important. But that's pretty complicated too. Your one friend shared a woodworking project and your other friend shared travel photos. Which one(s) of those are you interested in? And when someone posts political stuff, is that something you find interesting, or is it something you prefer to keep separate from Facebook? There are a lot of different types of things people post, so the scope of figuring out what's important is pretty big.


There are a few projects working on a next-gen, agent-centric distributed web. Holochain is my favourite:

"Holochain is an open source framework for building fully distributed, peer-to-peer applications.

Holochain is BitTorrent + Git + Cryptographic Signatures + Peer Validation + Gossip.

Holochain apps are versatile, resilient, scalable, and thousands of times more efficient than blockchain (no token or mining required). The purpose of Holochain is to enable humans to interact with each other by mutual-consent to a shared set of rules, without relying on any authority to dictate or unilaterally change those rules. Peer-to-peer interaction means you own and control your data, with no intermediary (e.g., Google, Facebook, Uber) collecting, selling, or losing it.

Data ownership also enables new frontiers of user agency, letting you do more with your data (imagine truly personal A.I., whose purpose is to serve you, rather than the corporation that created it). With the user at the center, composable and customizable applications become possible."

http://developer.holochain.org and http://holo.host. Apps built using the Holochain framework/pattern: http://junto.love

Also great are http://scuttlebutt.co.nz and https://docs.datproject.org/. Fritter is Twitter built on the DAT protocol (Paul Frazee) https://twitter.com/taravancil/status/949310662760632320


I am thinking about the concept of “the last mile to the user’s attention”.

Currently, software clients of Mastodon or Twitter hold that mile. Mastodon gives all content unfiltered, which could be too much at times, while Twitter does some oft-annoying opaque black magic in its timeline algorithms.

A better solution would be a protocol for a capability that filters content with logic under your control: a universal middleware standard that is GUI-agnostic and can fit different content types.
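Something as small as this interface could seed such a standard (every name here is hypothetical):

  // GUI-agnostic filter middleware: the client pipes every incoming item
  // through a user-supplied chain before rendering, whatever the network.
  interface ContentItem {
    id: string;
    author: string;
    contentType: "text" | "image" | "video" | "boost";
    text?: string;
  }

  type Verdict = { action: "show" | "hide" | "demote"; reason?: string };
  type ContentFilter = (item: ContentItem) => Verdict;

  function applyFilters(items: ContentItem[], filters: ContentFilter[]): ContentItem[] {
    return items.filter((item) => filters.every((f) => f(item).action !== "hide"));
  }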

By adopting this, open/federated social could start catching up to for-profit social on content-filtering features (in a no-dark-patterns way, benefiting user experience), hopefully stealing users.

Ideally it could be used by the likes of Twitter and Facebook. Of course, given the size of for-profit social, such an integration would take some unimaginably big player to motivate them to adopt it (the state of their APIs is telling), but if the standard is there, there's a chance.


Excellent idea; soon this will be a requirement for using the web in any productive way, considering the ratio of good information to junk is rapidly getting worse. We already do this in a way, by only visiting certain sites that we like and following certain users. A personal AI would make this process much more efficient.

I do see a content filtering AI as very difficult to achieve, and I don't think it will be possible for quite some time. There are so many small problems, even getting AI to recognize targeted content is difficult, given that websites can have infinitely different layouts. And what about video or audio? The most practical way to achieve a content AI would be to persuade websites to voluntarily add standardized tags so that the only problem becomes predicting and filtering. Although I could see some issues with that like people trying to game the system.


I agree - wasn't the browser intended to be the user agent? And as a counterpoint to some of the replies to you: surely people can just pay instead of sites being ad-based. What other industries operate in this absurd way? The public must think there's no cost to creating software if everything's always free.


Facebook figured out how to bring a Usenet flame war to the masses and profit from it, job well done!


> What if, instead, you had a personal AI that read every Facebook post and then decided what to show you. Trained on your own preferences, under your control, with whatever settings you like.

That would be great. Having an artificial intelligence as a user agent would be perfect. That'd be the ideal browser. So many science fiction worlds have the concept of an intelligent navigator who acts on behalf of its operator in the virtual world, greatly reducing its complexity.

Today's artificial intelligences cannot be trusted to act in our best interests. They belong to companies and run on their computers. Even if the software's open source, the data needed to make it useful remains proprietary.


It’s really not as sophisticated, but these guys[1] created an extension that in addition to their main objective of analyzing Facebook’s algorithm also offers a way to create your own Facebook feed. If I got it right, they analyze posts their users see, categorize them by topic and then let you create your own RSS feed with only the topics you want to see.

It’s not clear to me whether you may see posts collected by other users or only ones from your own feed and it seems highly experimental.

[1] https://facebook.tracking.exposed/


> What if, instead, you had a personal AI that read every Facebook post and then decided what to show you. Trained on your own preferences, under your control, with whatever settings you like.

There is a feedback problem, though, which is that your preferences are modified by what you see. So the AI problem devolves into showing you the kind of content that makes you want to see more of it, i.e. maximizing engagement. I think a lot of people are addicted to controversy, "rage porn," and anger-inducing content, and these agents are not going to help with that issue.

If we could train AI agents to analyze the preferences of people, I think the best use for them wouldn't be to curate your own content, but to use them to see the world from other people's perspective. If you know in what "opinion cluster" someone lies and can predict their emotional reaction to some content, you may be able to identify the articles from cluster A that people from cluster B react the least negatively to, and vice versa. And this could be leveraged to break echo chambers, I think: imagine that article X is rated +10 by cluster A and -10 by cluster B, and article Y is rated +10 by cluster A but only -2 by cluster B. It might be a good idea to promote Y over X, because unlike X, Y represents the views of cluster A in a way that cluster B can understand, whereas X is probably some inflammatory rag.

The key is that you can't simply choose content according to a user's current preferences, they also have to be shown adversarial content so that they have all the information they need about what others think. This is how they can retain their agency. Show them stuff they disagree with, but that they can respect.

I expect that a system like the one I'm describing would naturally penalize content that paint people with opposing points of view as evil or idiots, because such content is the most likely to be very highly rated by the "smart" side and profoundly hated by the "stupid" side. Again, note that I'm not saying content everyone likes should be promoted, it's more like, we should promote the +10/-2 kind of polarization (well thought out opinion pieces that focus on ideas which might be unpopular or uncomfortable) over the +10/-10 kind of polarization (people who disagree with me are evil cretins).
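In scoring terms, that amounts to ranking content by its minimum cluster-average rating rather than its overall average. A sketch:

  // Rank by the *worst* cluster's average rating: an item rated +10 by A
  // and -2 by B (score -2) beats one rated +10 by A and -10 by B (score -10).
  function bridgingScore(ratingsByCluster: Map<string, number[]>): number {
    const clusterAverages = [...ratingsByCluster.values()].map(
      (rs) => rs.reduce((a, b) => a + b, 0) / rs.length,
    );
    return Math.min(...clusterAverages);
  }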


In the right medium, perhaps the user agent would also decide when my posts are shown to people versus when an ad is shown in place of my post, such that I make money. Then a site like Facebook would only make a small portion of my ad revenue in exchange for hosting it.


I'm building something like that. My app learns based on the content you've read, to show you more relevant passages for you:

https://thinkerapp.com/


> What if, instead, you had a personal AI that read every Facebook post and then decided what to show you.

So you can read more of what you already agree with? That's called living in a bubble. The mind cannot grow in a bubble.


Well that kind of already exists in a project called "huginn".

https://github.com/huginn/huginn


Nah, they’ve already filtered by your IP, geolocation, browser fingerprint, etc.

We need access to all the data so we can decide which algorithms to apply. There’s really no escaping that


bring back rss feeds

then I choose what I read in my reader


This already exists. It's called Facebook Purity.

It obeys what you say. No cats? No problem. No babies? No problem.

And yes, it also does filter out ads.

FBP


"TiVo, but for the browser."


> AI that read every Facebook post

I doubt FB would let you do that. It's "their" content.


Sure, you can't read every Facebook post, but if your browser extension is scanning your feed and suppressing posts for you, how can they even stop you?
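Mechanically they can't, really. A content script can watch the DOM and hide whatever you've told it to, using nothing more exotic than the standard MutationObserver API (the selector below is a made-up placeholder; real feed markup is obfuscated and changes often):

  // Content-script sketch: hide newly rendered posts matching block rules.
  const blockedPatterns = [/sponsored/i, /suggested for you/i];

  const observer = new MutationObserver(() => {
    document.querySelectorAll<HTMLElement>(".feed-post").forEach((post) => {
      if (blockedPatterns.some((re) => re.test(post.innerText))) {
        post.style.display = "none";  // suppressed before you scroll past it
      }
    });
  });

  observer.observe(document.body, { childList: true, subtree: true });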


It's a violation of copyright under current interpretations of the Copyright Act. Companies like FB are well aware of this and send C&Ds to this effect every day.


That is shocking. How could something like uBlock Origin or Privacy Badger ever exist? They're doing the exact same thing: modifying the page payload. Even BrowserName developer tools would run afoul of this. I can't fathom how these are materially different.


These projects continue to exist because no company has felt it's in their interest to bring suit, I guess. This is the type of thing that companies like to keep quiet, and if something like uBlock Origin isn't a pervasive threat, they won't risk publicizing and potentially losing the loophole. In particular it would be dumb to sue the EFF for Privacy Badger, since part of the reason the EFF exists is to fight such things in court.

Look up the "RAM Copy Doctrine". Basically, it means that every time data is copied within the computer's memory, it's a potential infringement. The HTML source of the page you've downloaded undoubtedly qualifies for copyright protection. Your argument would be an implied license to modify the page to make it suitable for display, but standard ToS will typically forbid any type of modification, removing any ambiguity into whether third-party extensions that modify the page may have an implied license. Your license would typically be limited to displaying the page as transmitted, and allowing an extension to read and modify the page would be an infringement.

I'm not a lawyer but I had a neat little SaaS business that was killed by a C&D along these same lines. Multiple attorneys reviewed and this is basically the summary. When I suggested that we do a peer-to-peer data transmission thing to avoid handling or transmitting any copyrighted content, I was warned that doing so could easily be interpreted as conspiracy and that it'd be best not to go down that route. Maybe someone else's attorney would say something different, but that's what mine told me.


Not a new idea. One example was early marketing of Apache Nutch.


Or you could just be your own user agent. Problem solved!


Great idea, but who's gonna fund it?


Nice idea. Anyone working on this?


Yeah, since what the world needs is more echo chambers.


This already exists — most social media is already curated. You only see tweets and posts from those you follow or friend. You can already block or ignore any undesirables. This works fine for self-curation.

There is no need for holier-than-thou censorship short of legal breaches. Good to see FB take this change of direction.


Except when it doesn't.


Such as?


Twitter shows me garbage I don't like, as does YouTube. They do this with very little regard for who you follow nowadays and give you no say in what types of stuff you actually want to be recommended. Sometimes they're nice enough to say why they recommend something (which should be standard), but most of the time it's just infuriatingly stupid.

I'm not against machine curation at all, mind you. I want the infrequent poster to have a higher weighed voice and such. But I want to be able to control the parameters.


For Twitter, use curated lists. This allows you to avoid "the algorithm".

You are bemoaning YouTube's discovery process; you need not watch what's "Up Next" — that's your choice.



