> The amount of friction to get privacy today is astounding
I don't understand this.
It's easy to get a local LLM running with a couple commands in the terminal. There are multiple local LLM runners to choose from.
This blog post introduces some additional tools for sandboxed code execution and browser automation, but you don't need those to get started with local LLMs.
There are multiple local options. This one is easy to start with: https://ollama.com/
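For a sense of what "a couple of commands" means, this is roughly the Ollama path on Linux (check ollama.com for the current install command; the model name is just an example):

```
# install (see https://ollama.com/ for up-to-date instructions)
curl -fsSL https://ollama.com/install.sh | sh
# pull a model and start a local chat session
ollama run llama3
```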
I'd assume they are referring to being able to run your own workloads on a home-built system, rather than surrendering that ownership to the tech giants alone.
Also, you get a kind of complete privacy in that the data never leaves your home, whereas at best you would have to trust the AI cloud providers that they are not training on or storing that data.
This, in particular, is a big motivator and a rewarding part of getting a local setup working. Turning off the internet and seeing everything run end to end is a joy.
We might be talking about two different things. Yes, under normal circumstances the setup steps involve software that defaults to using telemetry -- though I'd be surprised if it were no longer possible to do the setup in an air-gapped environment using e.g. offline installers, zipped repos, wheel files, and so on.
My comment was referring to runtime workloads having no telemetry (because I unplugged the internet)
> whereas at best you would have to trust the AI cloud providers that they are not training or storing that data.
Yeah, about that. They've even illegally torrented entire databases, they hide their crawlers, and they crawl entire newspaper archives without permission. They didn't respect the rights of big media companies, but they're going to respect the little guy's, of course, because the T&Cs say so. Uh-huh.
Also, OpenAI has already admitted that they do store "deleted" content and temporary chats.
I agree, but I was just repeating an argument I've heard: that if the companies claimed your data is safe (think of the Amazon Bedrock ToS policy, which says as much) and then didn't follow through on that promise,
it would cause an insane backlash and nobody would use the product. So it is in their interest not to train on or record it.
But yes, I also agree with you. They are already torrenting :/ So if they can do illegal stuff scot-free, I'm pretty sure they might do this too, idk.
And yeah, this is why I was saying that local matters more, tbh. You just get rid of that whole headache.
I don't understand: the translation is not good because he omitted the author's name? He stated it plainly in his article:
> As it happens, I recently translated a short story by Kir Bulychev — a Soviet science-fiction icon virtually unknown in the West.
I for one enjoyed reading it! As for the article, it's on point. There will be fewer historians and translators, but I suspect those that stick around will be greatly amplified.
I love this. This was me in 2017, and following the white rabbit led to a most spectacular year. It really is a birthright, and people choosing ‘no’ are really missing out on a fundamental aspect of being alive.
On another note, I always love psych threads on HN. High quality posts all around.
I feel a bit sad hearing this, I'm a pretty anxious person so I'll probably never experiment. While I love reading about people's experiences I often hear warnings that people like me should stay far far away.
You sound very much like me. Totally happy with my usage, and a chronic user that smokes 2-3 times per day reaching about a gram. It helps with empathy, it helps with creative flow, it helps with many things, but there are downsides of course. From August to December I took a break and it was immensely useful for perspective and for resetting my brain. My dreams were extremely vivid and rich, and going back to smoking was like rediscovering pot when I was 16. T breaks are indeed the key as GP pointed out.
I don't understand the appeal of using a cli to manage commits, branches, remotes, merges with conflicts, and so on. To me all these things are so much better internalized and understood when presented visually. Git GUIs are aplenty (Sublime Merge being my latest discovery, SourceTree before that) and generally really good. Combined with the already amazing GitHub web GUI, it's a wonder what use case is better served by sticking to the cli, other than this misplaced notion that it's what the cool kids are doing.
How about when you already do most of your development on the command line? In web development anyway, most frameworks, servers and tools are CLI only, so you are already on the command line for the most part. It's far quicker to do a quick commit there and then rather than move to a GUI.
That said, I use both. Most of my commits etc. are on the CLI but I still switch to a GUI if I want to browse the repo or see more detailed diffs etc.
Exactly this. I use the GitHub CLI and website interchangeably throughout the day. When making commit messages, I generally always link a commit to an issue. So the end of a commit message will generally say "Resolves #281" or "Fixes #433" or "See Issue #218" or whatever.
I usually need to double-check open issues when writing these messages, and since I am already writing commits on the command line, having a split terminal or tab that can pull up the open issues with a single command and show them in a concise, clean format is super helpful and fast.
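For example (assuming the gh CLI; the grep term is just a placeholder):

```
# open issues for the current repo, in a compact table
gh issue list --state open
# or narrow it down while writing the commit message
gh issue list --state open | grep -i login
```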
If you are already working in the command line all day long, then it makes perfect sense to have access to GitHub in the command line as well.
Another benefit is that most IDEs will have direct terminal access within the IDE, which means you can get relevant GitHub details about your project from within the IDE via the terminal, without needing to leave the IDE.
Lastly, the CLI version is far more concise and simple compared to the web version.
I am not hating the web version. I still use the web version plenty. But it is just another useful tool in the quiver that has plenty of use cases and simplifies workflow.
I've been doing `git push -u origin HEAD` and then using my mouse to click on the "go here to create a PR" link that gets printed, which isn't too bad. Takes you right to the page where you can review the changeset before opening the PR. If I'm not actually ready to open a PR by the time I push upstream, I just open a draft instead.
Yes, that's what I do as well. I also have the old hub frontend for git installed. The idea is that you simply alias git to hub and then you get a few extra commands.
My favorite is "git browse", which opens a browser for the current repo & branch. There are a few other commands that are probably useful but that I don't really do much with.
Do you run this command frequently enough to do an alias? I feel like it'd be better to just have the muscle memory of typing the actual command instead
I used to have a lot of git aliases but then I became dependent on those and forgot the actual git commands that were lying underneath. So whenever I went to a new system I'd have to port my dotfiles or look up the git docs for what the commands were. If a colleague asked me "how do I do X" I had no clue and had to look in my bashrc. Nowadays I only add them if I do it many many times per day. Like "git commit -a -m" for example. I try to keep things as close to stock as I can
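For what it's worth, that kind of alias is a one-liner in ~/.bashrc (the short name here is made up):

```
# the one git shortcut I keep around
alias gcam='git commit -a -m'
# usage: gcam "fix the thing"
```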
This type of alias seems on the fence for me; if it was like 10x a day I'd definitely be on board, but a few times a day is a gray area.
Yeah this is huge— I used to use `hub` for this one thing as well. The alternative is even worse if you cloned from a repo you don't have push access to, so your first step is going to the web UI, creating the fork, switching the remote on your clone, and then _finally_ pushing and creating the PR.
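For comparison, with the newer gh CLI that whole dance collapses to something like this (a sketch; remote naming may differ depending on your setup):

```
# from a clone of a repo you can't push to:
gh repo fork --remote        # fork it on GitHub and wire the fork up as a remote
git push -u origin HEAD      # push your branch (check `git remote -v` for the remote name)
gh pr create --fill          # open the PR against the upstream repo
```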
Official CLI tools make it much easier to build automation with shell scripts.
One example that comes to mind - we used hub to hack together a quick security feature that errors out our CI/CD pipeline (which runs shell scripts) if there are any open PRs that are labeled "security" (i.e. Dependabot opening a PR to update a vulnerable library), which forces developers to keep their dependencies up-to-date in order to deploy into production.
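A sketch of that kind of gate using gh instead of hub (the label name and messages are illustrative, not our actual script):

```
# fail the build if any open PR carries the "security" label
open_security_prs=$(gh pr list --state open --label security --json number --jq 'length')
if [ "$open_security_prs" -gt 0 ]; then
  echo "Open security PRs found; update your dependencies before deploying." >&2
  exit 1
fi
```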
GUI tools don't work as part of CI/CD pipelines.
Libraries in higher-level languages (e.g. Python) force you to make sure that there's a library for every tool that you work with - if a specific tool is missing a Python library, then you have to deal with that yourself. Shell scripting is much more productive for gluing multiple sets of tooling together, as long as the shell scripts remain of a maintainable length.
There are a bunch of examples which I find very useful with the CLI: Creating new repos (gh repo create), opening the repo in the browser (gh repo view), checking out PRs when you have the ID (gh pr checkout ID), diffing the currently checkout PR against the base (gh pr diff).
It has nothing to do with being a cool kid. No two people are the same. How you best process information may not be how someone else best processes information. Neither is right or wrong; they are just different. Everyone deserves to have tools available that fit how their brain best processes information.
Better scripting / automation maybe? I don't have any specific example in mind right now, but being able to pipe things together with other utilities could make for interesting applications, and is easier than having to interact with a REST API.
I have an open source project that moved its issue and change tracking from Jira to GitHub. Before, when the project made a release, it would fetch info from Jira to create a changelog.
With this tool, I was able to do something similar using Github as the source of information instead[1].
I fetch the info using the gh tool, output the result into json, and use a python script to format the output which results in a decent looking automatically generated changelog[2].
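The shape of it is roughly this (the commands and the script name are illustrative, not the project's actual release tooling):

```
# dump the merged PRs as JSON, then let a small script turn them into a changelog
gh pr list --state merged --json number,title > prs.json
python format_changelog.py prs.json > CHANGELOG.md
```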
It's not the most exciting thing, and I could have probably achieved it in a different way using the GraphQL API directly, but for the needs of the project this fit the bill and let me get on with the release.
Indeed, a project I work on has a release script which tags a release and all and currently uses a third party tool for the GitHub interactions. Now we can use the first party GitHub CLI commands instead.
The things you mentioned are better when presented visually. Visual representations leverage the visual processing system of the human brain, and for graph-like data are near-objectively superior to linear textual representations.
However, you can approximate visual representations of these things in a CLI - `git log --graph --pretty=oneline --decorate --abbrev-commit` will use ASCII characters to draw the DAG of commits.
Moreover, the GitHub web GUI is not "amazing". It's tolerable as far as web UIs go, but it's missing a lot of keyboard shortcuts (and none of them are customizable), has no built-in extensibility (the fact that you can inject your own scripts and stylesheets is a hack only at the display level, and an impractical one at that), EDIT: isn't scriptable, and is incredibly resource-intensive relative to a native application. The GitHub CLI allows you to manage the non-git parts of GitHub from the command-line (and make your own GitHub native client by extension) - which is desirable, given the above.
Not really sure what you're talking about. Not everything you automate on GitHub happens in the context of a CI pipeline like in GitHub Actions. There are many cases for small scripts where you just need to execute a couple of commands.
I've never felt the need to visualize git scenarios; they just aren't all that complicated most of the time, though I can see how it might be useful to open a graph viewer if you have a tricky merge to sort out.
For me, being able to just note down in a text file an exact trace of what I've done is the gist of why CLIs are superior for my use. How do you even begin to keep track of what you do in a GUI? You could record video, but the information density is way too low for it to be useful, and it's worthless for automation.
A lot of my stuff gets automated by me first doing stuff manually, recording what I do in a script, and next time just running said script. That's just flat out not possible with most GUI tools, and even if it were, it's too cumbersome to be worth doing.
Git GUIs simply don't provide the power/flexibility the CLI provides without introducing a UI with a gazillion options. I've tried many (GitKraken, Tower (paid for it too!), SourceTree, VS's integration) and I've always gone back to the CLI.
Depends on how well-designed the app is. GitUp has been my go-to for years, after trying almost every other app out there. It offers a bunch of powerful features exposed mostly through right-click menus and single-button shortcuts. The only reason I ever go to the command line (and I used to be a religious command-line-only Git guy) is because GitUp freezes on extremely large diffs, which one of my projects has a lot of.
GUIs are discoverability nightmares though, as you describe. CLIs are the only place you can do a “man git | grep thing”. On macOS there’s ⌘ + ?, but that has never covered all the content in a man entry for the related CLI, IME.
> What use case is better served by sticking to the cli?
Good question. Knowing the "why" of things is important. My answers:
1. Focus - Opening a browser and clicking around takes more patience. It tempts me to go update my company's internal documentation about something irrelevant to my current task.
A CLI lets you pipe things to grep which lets you focus on specific information you care about.
2. Memory - I can write aliases to help me remember my common workflows.
3. Automation - I can have a script check PRs for me. This reduces context switching and enables greater focus and productivity.
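Concretely, 2 and 3 can be as small as this (the alias name and queries are just examples):

```
# 2. an alias so I don't have to remember the exact incantation
alias myprs='gh pr list --author "@me"'
# 3. a quick check for PRs waiting on my review, scriptable as-is
gh pr list --search "review-requested:@me"
```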
Speed and overhead are both areas that should get a lot more focus nowadays; the Github desktop app along with a ton of other applications are Electron / webapps, each one adding a number of always-active processes, complex and expensive rendering, script execution, and a ton of memory usage, which do add up over time. I mean a lot of developers will have two or three Electron apps open right now; Slack and / or Discord, VS Code, Notion, and a bunch more in their browser of choice.
1. Can be addressed by using a GUI which is not on the internet, for example SourceTree, the git interface in IntelliJ, probably many more.
2. Is often unimportant if the GUI tool can make common workflows one or two clicks.
3. Having the CLI tools to automate stuff is great, but is not necessarily the best way to have an interactive session with a repository.
Some of the things I like about graphical interfaces for Git:
- The information density is usually (depending on the program) great - I can see local branches, remote branches, commits and tree diagram for the selected branch, all in less space than a terminal usually takes up.
- A bunch of actions are available by right-clicking on a relevant item, so I don’t have to remember commands and command-line flags etc and I can just get on with things.
- Some actions like “show me the diff between these two commits” are SO much easier that they become a viable way of working.
A CLI is scriptable and automatable, so there's that at least. How about "find all related issues and close them based on the commit message, and push to production if it's an urgent fix", etc.?
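A hypothetical sketch of that with gh (the commit-message convention is illustrative):

```
# close every issue referenced as "#N" in the latest commit message
git log -1 --pretty=%B | grep -oE '#[0-9]+' | tr -d '#' | while read -r issue; do
  gh issue close "$issue"
done
```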
I have to agree. I used the CLI for years until a coworker showed me GitHub's desktop app. I was hesitant at first, but it makes managing multiple projects so much easier.
It's more about automation than about end-user (developer) UI. CLI gives you a simple way to automate tooling for your team's processes (simpler than writing code to call into REST API's), and it's code, so you can maintain it as code and not some set of configurations on a UI somewhere.
Probably because the GUIs are always missing something.
Today I wanted to see the commit history and changes of a single file. I tried for 20 minutes to figure out how to do that in Sublime Merge, some of that time spent searching online. Failed. Used the command line like I probably should have in the first place.
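For reference, the command-line version is a one-liner (the path is a placeholder):

```
# full history of one file, following renames, with the diff for each commit
git log -p --follow -- path/to/file.c
```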
Besides the obvious advantages of command line user interfaces for a lot of use cases, I am not aware of any really good Git GUI. Can you name any for Linux or Mac OS? Commercial would be ok, as long as it can be installed locally and does not require a server.
Thanks, then I will give it a try. The last time I looked at the web page I got the impression it was a server-based application and required an account on their servers.
We actually licensed SmartSVN, which was a life saver for its version tree display. For most operations, we tended to use the command line nevertheless.
On Linux I like using gitg. I think it is "good" in the sense that for the tasks it can do (mostly viewing history) the UI is well thought out and simple to use. However, it can't do more advanced tasks, so it might not be what you are asking for.
vscode with ‘git graph’ extension is the best git toolset ever.
My favorite feature is ctrl+click 2 nodes in the git tree and immediately see file level diff which I can explore in vscode’s diff viewer.
For work, we use git flow with github PR (which I do on github website) and always work in feature branches. I am able to navigate git like a pro, cherry picking etc as needed without a problem. It even works well with git submodules.
If something unexpected happens, git graph is the best tool to figure out what went wrong and to repair it.
Also, vscode handles merge conflicts in a way I can actually understand and correct without it slowing me down.
I largely use the CLI and I still manage conflicts and diffs visually (in plain git), by triggering a visual diff application from the CLI when I prefer doing so.
The git CLI has the option to use an external diff helper if you prefer.
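For example (the tool name is just an example; `git difftool --tool-help` lists what's available on your system):

```
git config --global diff.tool meld
git config --global merge.tool meld
git difftool HEAD~1    # review the last commit's changes in the GUI
git mergetool          # resolve merge conflicts visually
```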
I'm not sure this is the case! In fact, I strongly suspect that its inability to display logs or diffs correctly has contributed to the widespread developer confusion with respect to Git, especially since there are many people out there who think, implicitly if not explicitly, that Git and Github are one and the same.
You know how a lot of people think merge commits (an important keystone in how easy Git makes it to read and write meaningful history) are inherently "confusing" or "messy", and try to avoid creating them? Well, if you look at them in Github's awful log interface, they sort of are! This doesn't seem like the fault of those developers, and it's not Git's fault, since it ships with powerful command-line and GUI tools for making sense of things.