See the history of Reddit. It was seeded with manually curated sock puppets before bots were viable, and now that bots are viable, I strongly believe the bulk of comments and posts in the popular subreddits are bots (and many are run by Reddit themselves).
One doesn't even need to use LLMs. I caught a bot that was cross-pollinating comments from a YouTube video's comment section into Reddit whenever that video was shared on Reddit.
How about a totally fake social network populated by LLM bots to rake in VC moolah?
I've recently been using a Ruby script with a bunch of methods, configured by a few YAML files, to generate a shell script that I can then run.
Each run generates a timestamped folder containing not only the bash script, but also a copy of the configs used to generate it and all the data the commands in the bash script need (JSON files). I find this style of generator a common pattern for frequently used but frequently tweaked scripts.
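A minimal sketch of that pattern, with hypothetical names throughout (`generate_run`, a `commands` key in the YAML, `run.sh`, `data.json` — none of these come from the actual script, they just illustrate the shape):

```ruby
require "yaml"
require "json"
require "fileutils"
require "time"

# Hypothetical generator: read YAML configs, then emit a timestamped run
# folder holding the generated bash script, copies of the configs, and the
# JSON data the script's commands will read at run time.
def generate_run(config_paths, out_root: "runs")
  run_dir = File.join(out_root, Time.now.strftime("%Y%m%d-%H%M%S"))
  FileUtils.mkdir_p(run_dir)

  configs = config_paths.to_h { |p| [File.basename(p), YAML.load_file(p)] }

  # Copy the configs alongside the script so each run is reproducible.
  config_paths.each { |p| FileUtils.cp(p, run_dir) }

  # Dump the merged config data for the bash commands to consume.
  File.write(File.join(run_dir, "data.json"), JSON.pretty_generate(configs))

  script = +"#!/usr/bin/env bash\nset -euo pipefail\n"
  configs.each_value do |cfg|
    Array(cfg["commands"]).each { |cmd| script << cmd << "\n" }
  end

  script_path = File.join(run_dir, "run.sh")
  File.write(script_path, script)
  FileUtils.chmod(0o755, script_path)
  run_dir
end
```

The nice part is that the run folder is self-describing: months later you can re-run or diff `run.sh` against its own configs without guessing what generated it.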
- I ask for a resource
- you give it to me
- any linked resources (stylesheets, scripts, images etc) are up to me to request
Therefore there is no "ethical" conundrum in blocking ads. The ad industry brought this on themselves by pushing malware and spam and actively making the web worse.
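The request model above can be sketched in a few lines. This is an illustration, not any real blocker's code; the HTML and the `ads.example.com` URL are made up:

```ruby
# One request, one response: the server hands back the page, and every
# linked sub-resource (stylesheet, script, image) is a *separate* request
# that the client is free to make -- or not.
html = <<~HTML
  <html><head>
    <link rel="stylesheet" href="/style.css">
    <script src="https://ads.example.com/track.js"></script>
  </head><body><img src="/logo.png"></body></html>
HTML

# The sub-resources the page asks the client to fetch, in document order.
linked = html.scan(/(?:href|src)="([^"]+)"/).flatten

# An ad blocker is just client policy: decline the requests it doesn't want.
blocklist = [/\bads\./]
to_fetch  = linked.reject { |url| blocklist.any? { |pat| url =~ pat } }
```

Nothing in the protocol obliges the client to issue the `track.js` request; skipping it is no different from a text-mode browser skipping images.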
Agreed. Advert blocking wasn’t a necessity until adverts became intrusive, tracking and targeting became pervasive, and every site became flooded with cookie banners.
I remember when AdWords was just a humble bar of contextual text links, absolutely manageable. Not so much the case now.
Think of what happens to a fluid when exposed to a vacuum. The fast-moving blade creates a low-pressure region, and the water around it instantly vaporizes into bubbles of water vapor. These small vapor bubbles then collapse, creating small shockwaves that cause noise and damage the propeller/screw.
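A rough way to see when this kicks in is the cavitation number, σ = (p − p_v) / (½ρv²): once local pressure drops toward the vapor pressure, σ falls toward ~1 and below, and bubbles start forming. The numbers below assume 20 °C fresh water near the surface, and the ~1 threshold is only a rough rule of thumb, not a precise inception criterion:

```ruby
RHO   = 998.0     # kg/m^3, density of water at 20 degC
P_V   = 2_340.0   # Pa, vapor pressure of water at 20 degC
P_ATM = 101_325.0 # Pa, ambient pressure near the surface

# Cavitation number: high means safe margin, near/below ~1 means
# bubble formation becomes likely on a fast-moving blade.
def cavitation_number(speed_m_s, ambient_pa = P_ATM)
  (ambient_pa - P_V) / (0.5 * RHO * speed_m_s**2)
end
```

At a blade section speed of 5 m/s the margin is comfortable (σ ≈ 8), but at 20 m/s it drops below 1, which is why propeller tips, the fastest-moving part, cavitate first.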
Uber and Lyft are betting that the 7/8 threshold will never be hit organically, but that they can buy a 7/8 share of the legislature, ensuring this measure can only develop in one direction.