Like, why are you manually tidying and fixing things? The first pass is never perfect. Maybe the functionality is there but the code is spaghetti or untestable. Have another agent review and feed that review back into the original agent that built out the code. Keep iterating like that.
My usual workflow:
Agent 1 - Build feature
Agent 2 - Review these parts of the code, see if you find any code smells, bad architecture, scalability problems that will pop up, untestable code, or anything else falling outside of modern coding best practices
Agent 1 - Here's the code review for your changes, please fix
Agent 2 - Do another review
Agent 1 - Here's the code review for your changes, please fix
Repeat until testable, maybe throw in a full codebase review instead of just the feature.
Agent 1 - Code looks good, start writing unit tests, go step by step, let's walk through everything, etc. etc. etc.
Then update your .md directive files to tell the agents how to test.
Voilà, you have an LLM agent loop that will write decent code and get features out the door.
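A minimal sketch of that loop, assuming a hypothetical `run_agent(role, prompt)` helper standing in for whatever CLI or API you drive your agents with (the function, role names, and stop condition are made up for illustration):

```python
# Sketch of the build/review loop described above.
# run_agent() is a stand-in for whatever tooling actually drives your agents.

def run_agent(role: str, prompt: str) -> str:
    """Send a prompt to the named agent and return its reply (stub)."""
    raise NotImplementedError("wire this up to your agent CLI or API")

REVIEW_PROMPT = (
    "Review these parts of the code. Flag code smells, bad architecture, "
    "scalability problems, untestable code, or anything else outside modern best practices."
)

def build_with_review(feature_spec: str, max_rounds: int = 3) -> None:
    run_agent("builder", f"Build this feature: {feature_spec}")
    for _ in range(max_rounds):
        review = run_agent("reviewer", REVIEW_PROMPT)
        if "no issues" in review.lower():  # crude stop condition, just for the sketch
            break
        run_agent("builder", f"Here's the code review for your changes, please fix:\n{review}")
    run_agent("builder", "Code looks good. Start writing unit tests, step by step.")
```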
Looks great! If you have multiple AWS accounts in your org, you probably want to use something like aws-sso-util to populate your profiles so you can quickly swap between them
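Once aws-sso-util has populated `~/.aws/config`, swapping accounts is just a matter of picking a profile; a small sketch of checking where each one lands (the profile names are made up):

```python
# Sketch: iterate over a few SSO profiles that aws-sso-util wrote to
# ~/.aws/config and print which AWS account each resolves to.
# Profile names here are made-up examples.
import boto3

for profile in ["org-dev", "org-staging", "org-prod"]:
    session = boto3.Session(profile_name=profile)
    account = session.client("sts").get_caller_identity()["Account"]
    print(f"{profile} -> {account}")
```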
Honestly, Claude Code Yolo Mode with MCP Playwright and MCP Google Chrome Debug already has sudo on my system plus full access to my Gmail and Google Workspace.
Also, it can do two-factor auth on its own.
Nothing bad ever happened. (+ Dropbox backup + Time Machine + my whole home folder is git-versioned and backed up to GitHub)
At first it felt revolutionary, until I realised I am probably just a few months to a year ahead of the curve.
AIs are so much better at being desktop sysadmins, writing routine code, and automating tasks that the idea of us users continuing to fill this role in the future is laughable.
AI Computer Use is inevitable, and it's already here (see my setup), just not widely distributed.
Self-driving cars are already here (see Waymo, not the Swasticar); computer use is super easy in comparison.
Oh, by the way: whenever Claude Code does something in my online banking, I still want to sign it myself. (But I don't ever look at my Stripe account any more; Claude Code does a much, much better job there than I'm interested in doing myself.)
CEO of Toptal here. If you like, I can ensure we review your profile and client matching history to see if there's anything we overlooked. I'm available on Slack or taso@toptal.com. We’ll see if we can optimize your visibility to clients needing backend/data optimization experts.
While we look into this, Opire (an open-source bounties site) has lots of short-term opportunities.
In just a few years, 30% of the US population will be responsible for electing 70% of the Senate.
How is that not "bullying"?
How does that play any role in an actual democratic nation, as opposed to some pre-Lincoln concept of the US as a series of independent states merely "unioned"? The answer is obvious: it doesn't.
When my full-time job was as a journalist, I used youtube-dl all the time as a way to have archives of content that was either in danger of being taken down (related to a mass-shooting or other global event) or that the creator would take down after it received attention. In fact, I used to have instructions for how to install it in the various wikis/docs for other team members (this was easier once the Python GUI frontend became available, but I still created step-by-step instructions for specific settings).
I produce/host a few weekly news shows focused on developer-centric news and updates and have a keyboard macro set up to run youtube-dl against a text file of URLs, not to download the videos themselves but their thumbnails, so I can use them in the on-screen graphics when talking about a specific story. Before I scripted that solution, having to manually extract the thumbnail for every video I was highlighting was a major PITA.
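Something like this is roughly what that macro boils down to, sketched with youtube-dl's Python interface (the file names are placeholders; the CLI equivalent is `youtube-dl --write-thumbnail --skip-download -a urls.txt`):

```python
# Sketch: read a text file of video URLs and grab only the thumbnails,
# skipping the video downloads entirely. File names are placeholders.
import youtube_dl  # yt-dlp exposes the same interface

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

opts = {
    "writethumbnail": True,                   # save each video's thumbnail image
    "skip_download": True,                    # don't download the video itself
    "outtmpl": "thumbs/%(title)s.%(ext)s",    # where the thumbnail files land
}
with youtube_dl.YoutubeDL(opts) as ydl:
    ydl.download(urls)
```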
Also, being frank, youtube-dl is significantly better than the official YouTube API for downloading past content from channels I own/manage. It’s faster and a lot more scriptable. I have automated scripts set to watch specific playlists or channels and auto-download stuff for archival purposes — and again, this is content that I either own or that exists on a channel where I’m one of the admins.
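For the watch-and-archive part, the `download_archive` option is what makes re-running cheap; a rough sketch (the playlist URL is a placeholder):

```python
# Sketch of an archival watcher for channels/playlists I own or admin:
# the archive file records what's already been fetched, so re-running
# this only pulls new uploads. The playlist URL is a placeholder.
import youtube_dl

WATCHED = ["https://www.youtube.com/playlist?list=PL..."]

opts = {
    "download_archive": "downloaded.txt",                 # skip already-archived items
    "outtmpl": "archive/%(uploader)s/%(title)s.%(ext)s",  # organise by channel
    "ignoreerrors": True,                                 # keep going past private/removed videos
}
with youtube_dl.YoutubeDL(opts) as ydl:
    ydl.download(WATCHED)
```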
It’s unfortunate that the test suite had links to commercial music; we’ve seen in past RIAA litigation that that alone is enough to undercut the argument that the tool isn’t encouraging downloads of copyrighted content.
But as a tool, for not just YouTube but so many other services, it’s invaluable even in scenarios that aren’t data hoarding, grey-area use, or straight-up infringement.
To be clear, I’ve often used youtube-dl to infringe (as have the vast majority of its users), but that’s my choice/fault. The tool itself has plenty of non-infringing uses. I just wish they’d either linked the test file to another area or used other examples.
> In an era of fakes and deepfakes, we need tools like youtube-dl more than ever.
I've been working on deepfake detection for the past two years. I use pytube3 and youtube-dl to scrape hours and hours of footage, not just from YouTube but from C-SPAN, news sites, anything I can get my hands on. I have ~100 hours pulled so far, at multiple encoding rates.
Facebook recently changed their API, and now the best I can easily scrape is 240p. That's insufficient to detect the artifacts the models need to train on. I can only pull a third of the C-SPAN videos I can watch through the viewer because, again, API changes. It's a constant game of cat and mouse.
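For the multiple-encoding-rates part, youtube-dl's format selectors are what make that practical; a sketch (the URL is a placeholder, and merging separate video/audio streams needs ffmpeg):

```python
# Sketch: pull the same clip at several resolution caps so the training
# data covers multiple encodes. The URL is a placeholder.
import youtube_dl

URL = "https://www.youtube.com/watch?v=..."

for cap in (1080, 720, 480):
    opts = {
        "format": f"bestvideo[height<={cap}]+bestaudio/best[height<={cap}]",
        "outtmpl": f"clips/%(id)s_{cap}p.%(ext)s",
    }
    with youtube_dl.YoutubeDL(opts) as ydl:
        ydl.download([URL])
```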
I'm not exaggerating when I say that without those wonderful folks out there keeping these tools and test suites up to date, this project would be massively disadvantaged.
I don't have any public release assets at the moment, but you can read about Siwei Lyu and Hany Farid, who are leading the charge in deepfake detection.
I beefed up my browser cache to remember everything and have a server in the background downloading any YouTube channel I visit. Anything I've ever looked at in the browser is now accessible offline.
Have you tried the neovim _integration_ VS Code extension? [0] It's not perfect (I'm still in a terminal 95% of the time, but my current VS Code setup is surprisingly not painful and I find myself using it more frequently), but it gets a lot further along than VsVim did for me (probably not a fair comparison, because I haven't actually used VsVim in many years) or VSCodeVim when I tried it recently. YMMV, of course.
A word of advice, though, if you plan to check this out: I'd recommend maintaining a separate neovim install specifically for the VS Code integration and making sure it has a fairly basic vimrc with limited plugins. You have to take care with this because (neo)vim plugins don't always play nicely with VS Code and vice versa.
I'm sorry to say it, but don't bother with this. It's unfinished and not in-depth. The most important Vim resource is "Practical Vim" by Drew Neil. If you haven't read it, you shouldn't be using Vim.
Everyone’s already moved on to using service workers to implement third-party cookies. I know because we are using it in production.
-edit-
This is how we use it:
We have two domains: our main site, `widget.com`, and `widgetusercontent.com`.
`widget.com` contains an iframe from `widgetusercontent.com`.
We have a service that runs on `widgetusercontent.com/service` that is controlled by us, but it needs to be authenticated with a temporary credential (doesn't matter if it gets leaked) and cannot run on the same domain as `widget.com` since it also contains user generated content.
We used to embed the URL `widgetusercontent.com/service?auth=$AUTH_COOKIE` on `widget.com` and have `widgetusercontent.com/service` set the cookie, but this no longer works because it's a third-party cookie, and it has never worked on Safari.
The solution is to load `widgetusercontent.com/service` as a blank page that does nothing but register a service worker. That page registers `widgetusercontent.com/service/sw.js?auth=$AUTH_COOKIE` as its service worker. The service we control returns a service worker script with $AUTH_COOKIE embedded in it, set up to rewrite every request under `widgetusercontent.com/service/*` and inject $AUTH_COOKIE into the request headers.
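Roughly, the endpoint that serves the worker looks something like this. This is only a sketch: Flask and the `X-Service-Auth` header name are stand-ins for whatever the real service uses, and the JavaScript body is generated as a template string with the credential baked in:

```python
# Sketch: serve a service worker script with the short-lived credential
# embedded. The worker intercepts every fetch under /service/ and adds the
# credential as a request header. Flask and "X-Service-Auth" are stand-ins.
from flask import Flask, Response, request

app = Flask(__name__)

SW_TEMPLATE = """
const AUTH = "%(auth)s";  // temporary credential baked in at generation time

self.addEventListener("fetch", (event) => {
  const url = new URL(event.request.url);
  if (url.pathname.startsWith("/service/")) {
    const headers = new Headers(event.request.headers);
    headers.set("X-Service-Auth", AUTH);
    event.respondWith(fetch(new Request(event.request, { headers })));
  }
});
"""

@app.route("/service/sw.js")
def service_worker():
    auth = request.args.get("auth", "")  # real code should validate/escape this
    return Response(SW_TEMPLATE % {"auth": auth}, mimetype="application/javascript")
```

The blank page at `widgetusercontent.com/service` then does nothing but call `navigator.serviceWorker.register("/service/sw.js?auth=" + token)`.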
I came up with this myself but I assume others would have as well.
I don't know if this'll work for tracking and other nefarious stuff ad networks use but this is a legitimate use case for us.
Yes, it works in Safari. Cookies don’t work, but this does.
A gap in your resume is called a sabbatical. You spent a year doing something else that was more important to you, because you could afford to. Medical leave plus some recovery time is a great reason. So are playing around with some new tech, investing in your education, spending time on your other interests, working for charity, or seeing something of the world. Slacking off may not sound great, but many employers are unlikely to care that much about the gap, and if they do, there's always a more interesting way to phrase it.
From (I think) an old Joshua Bloch talk on API design, paraphrased:
* If you generalise based on one example, you will get a flexible API that can handle only that example.
* If you generalise based on two examples, you will get a flexible API that can switch between those two examples.
* If you generalise based on three examples, you have a chance of abstracting over the common essence.
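A toy illustration of the difference (made up here, not from the talk):

```python
# Made-up illustration of the one-example vs three-example rule.
from typing import Callable, Iterable, Sequence

# Generalised from the only example we had (CSV): the "flexible" API is
# really just CSV with extra knobs, and can only ever produce CSV.
def export_report(rows: Iterable[Sequence[str]], delimiter: str = ",") -> str:
    return "\n".join(delimiter.join(row) for row in rows)

# After JSON and XML exports showed up, the common essence turned out to be
# "turn rows into text", and the output format became the pluggable part.
def export_report_v2(rows: Iterable[Sequence[str]],
                     render: Callable[[Iterable[Sequence[str]]], str]) -> str:
    return render(rows)
```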
I work in this field so I'm incredibly biased: automated business solutions that cut entry-level data employees out of the equation. You save TONS on the bottom line and cut out human-driven processes that are error-prone and difficult to manage. I'm talking about things beyond "API-driven dev", more in the realms of Puppeteer, Microsoft Office automation, screen scraping (mouse/keyboard), etc. I make APIs out of things that other devs balk at - and trust me, it has a lot of market value.
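As a trivial flavour of what cutting the manual data entry out looks like, here's a sketch; the spreadsheet layout, endpoint, and libraries (openpyxl, requests) are all stand-ins for whatever the real stack is:

```python
# Made-up example: push rows from a spreadsheet that used to be keyed in
# by hand into an internal API. Workbook, sheet layout, and endpoint are
# hypothetical.
import openpyxl
import requests

wb = openpyxl.load_workbook("daily_orders.xlsx", data_only=True)
ws = wb.active

for order_id, customer, amount in ws.iter_rows(min_row=2, values_only=True):
    resp = requests.post(
        "https://internal.example.com/api/orders",  # hypothetical endpoint
        json={"id": order_id, "customer": customer, "amount": float(amount)},
        timeout=10,
    )
    resp.raise_for_status()
```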
This isn't as "up and coming" as all of the other items people are mentioning, but I'd put it on an "always increasing in popularity" trajectory due to an ever-increasing need. It's not really sexy or interesting, but there will always be a HUGE market for the things that I can do =)
I will warn people that "up and coming" tech is often fad-based and has boom and bust cycles, and personally I'd rather be working for a paycheck than waiting to win the lottery in this regard.
Maybe this is not the answer you're looking for: but I think CRISPR-Cas9 seems like the most exciting technology probably in all of science. It's a system that can be used to literally change the DNA of living creatures, and on top of that it's highly accessible. Think of the hacker culture today and now imagine the same rate of change for biological engineering.
We'll have massive libraries of reusable "components" for interesting DNA sequences. People will slowly splice together more complicated features and freely trade organic components with each other via post. At some point in the future, electronics will catch up to bioengineering, leading to better ways to make changes to DNA. We'll eventually be able to change DNA in something like a "biology IDE" and have usable components printed out the other end. After that point, our world probably won't look anything like it does today.
It won't be long before someone decides to give themselves glowing skin or super strong muscles (and people have already tried the latter!) I for one welcome our super-human overlords.
Example of a book I published with this approach: https://www.amazon.com/dp/0692553916 (a fairly challenging type setting problem with lots of footnotes, foreign language, etc.)
The main gotcha is that if you really want it to look good you have to dig into the LaTeX template, so ultimately it would take more effort to make it push-button for non-technical users.