> If the SSH connection is set to disallow passwords and only authorize via SSH keys, how big of a risk is this
Low risk, do this. Keys (ed25519, 4096-bit RSA) are impractical to brute-force. However, I'd also recommend:
- use a port other than 22 (add it to your ~/.ssh/config for easier UX if needed) - port 22 can get incredibly noisy with tons of bots probing it
- disable PasswordAuthentication, disable PermitRootLogin - use a normal user with sudo for your SSH access
- consider a VPN, please - I use Tailscale, but I hear Headscale is good - then use UFW to allow SSH only from the Tailscale network (I generally allow all traffic on Tailscale). Tailscale wrote a guide on this [1]
- do not add and then forget authorized_keys entries from machines you aren't using
- I'm especially worried about how people keep giving Clawdbot/Openclaw access to all their machines; key auth means the machine is authorized on your server
- For new servers I often just add all my public keys to them (GitHub lists all your public keys at github.com/GH_USERNAME.keys)

1: https://tailscale.com/docs/how-to/secure-ubuntu-server-with-...
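The server-side settings above amount to something like the following sshd_config excerpt (option names as in OpenSSH; the port number is just an example):

```
# /etc/ssh/sshd_config (excerpt) - restart sshd after editing
Port 2222                      # example non-default port
PasswordAuthentication no      # keys only
PermitRootLogin no             # log in as a normal user, then sudo
```

And the client-side convenience entry, so you can still just type `ssh myserver` (host alias, IP, and username here are made up):

```
# ~/.ssh/config
Host myserver
    HostName 203.0.113.10
    Port 2222
    User deploy
```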
Thanks a lot for the detailed response. I see Tailscale pop up here often and have been meaning to better understand how it could fit into my typical hosting setup, so I appreciate that reference.
For additional context I usually host on a shared or dedicated VPS, and in this case am managing a WordPress site I inherited. It seems to me that if the SSH connection is restricted by IP and limited to keys, there are much larger risks involved in hosting a WordPress site publicly available on the internet w/ dozens of plugin dependencies.
Many people keep offering advice to consider a VPN, and while a VPN is very useful, I have not yet come across a reason not to use SSH auth. Like, what can actually happen? From my pov the risk of running all sorts of userspace software with internet access is much greater, even without port forwarding.
> key auth means the machine is authorized on your server
Not necessarily: it depends on whether your key is passphrase-protected and how your SSH agent is configured (if you use one). You can have the standard OpenSSH agent ask you for confirmation of every key usage, for example.
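With the stock OpenSSH agent, that per-use confirmation is the `-c` flag when adding the key (a graphical askpass helper must be available to show the prompt):

```
# ask for confirmation every time this key is used
ssh-add -c ~/.ssh/id_ed25519
```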
> consider a vpn please
But also consider how you'll fix a broken VPN without SSH access.
Received 15+ in 10 minutes on a public email address (Dropbox, SoundCloud, GitLab, Tidelift, etc.). Then they just started hitting handles on the domain (diddy@, epstein@). Just placing an aggressive block on "Activate account" and "zendesk" in the content for now.
Also Kenyan. I recently spent 10 minutes explaining a technical topic via chat, and the response I got was "was this GPT?". I took a few minutes, then just linked an article on how underpaid Kenyans trained ChatGPT for OpenAI [1].
I have been thinking about how to solve this problem. I think one of the reasons some AI assistants shine over others is how well they can reduce the amount of context the LLM needs to work with, using built-in tools. I think there's room to democratize these capabilities. One such capability is letting LLMs work directly with embeddings.
I wrote an MCP server, directory-indexer[1], for this (a self-hosted indexing MCP server). The goal is to index any directories you want your AI to know about and give it MCP tools to search through the embeddings, etc. While agentic grep can be valuable, when working with tons of files on similar topics (like customer cases or technical docs), pre-processed embeddings have proven valuable for me. One reason I really like it is that it democratizes my data and documents: it gives consistent results across different AI assistants - the alternative being vastly different results depending on the built-in capabilities of each coding assistant. Another is having access to your "knowledge" from any project you're on. Since this is self-hosted, I use nomic-embed-text for the embeddings, which has been sufficient for most use cases.
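This isn't the directory-indexer code itself, just a sketch of the core idea behind any embedding search: rank pre-computed chunk embeddings by cosine similarity against a query embedding. The file names and 2-d vectors are toy values; a real setup would get the vectors from a model such as nomic-embed-text.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// search ranks indexed documents by similarity to the query embedding
// and returns the top-k document names.
func search(query []float64, index map[string][]float64, k int) []string {
	type hit struct {
		name  string
		score float64
	}
	hits := make([]hit, 0, len(index))
	for name, vec := range index {
		hits = append(hits, hit{name, cosine(query, vec)})
	}
	sort.Slice(hits, func(i, j int) bool { return hits[i].score > hits[j].score })
	if k > len(hits) {
		k = len(hits)
	}
	out := make([]string, k)
	for i := range out {
		out[i] = hits[i].name
	}
	return out
}

func main() {
	// toy 2-d embeddings; real ones come from an embedding model
	index := map[string][]float64{
		"customer-case.md": {0.9, 0.1},
		"tech-doc.md":      {0.1, 0.9},
	}
	fmt.Println(search([]float64{1.0, 0.2}, index, 1)) // prints [customer-case.md]
}
```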
I got diagnosed with type 1 diabetes in Feb (technically LADA as it's late onset). I'm the first in my family with it so I had zero info on it. I tried getting some CGMs to use but most don't work in Kenya as they are geo-locked, and even apps for measuring carbs like CalorieKing are not available in my region. I was really frustrated with the tech ecosystem, and started working on My Sukari as a platform of free tools for diabetics.
I mostly get time to work on it on the weekends, so it's not yet ready for public use, but I've fully fleshed out one of the main features: Sugar Dashboard - a dashboard that visualises your glucose data and helps you analyse it more easily.
I'm really passionate about this and about getting as many free, practical tools into the hands of patients as possible (it honestly shouldn't be this hard to manage a disease).
I used to work for Lark. They raised $140mm to solve this problem and the best they could do was a non-AI chatbot that whined at users for not eating enough vegetables. The Lark app has 100% user drop-off within 60 days and yet is still the Silicon Valley darling of the diabetes space.
Your platform has more science & more solution than 100 engineers in 3 years could produce. Keep at it and know with confidence that there is great value in what you are building. I know it's not your primary goal, but this will be lucrative if you keep going. I wish you a lot of luck, this is very cool!
All types.
The Sugar Dashboard allows importing data from different glucose apps, so its goal is to let you visualize and analyze your data. I hope to integrate with CGMs directly if I get ones that allow it, and also to pull from Health Connect. Sharing with specific people, e.g. your doctor, is also a big ask that I'm working on.
The other WIP tools will be for general health, not just diabetes - like carb counting from a photo via AI.
Also recently diagnosed and just open sourced how I'm using AI to count carbs + get insulin doses [1]. Biggest issues I've seen to starting a legit business is not having sanctioned access to real-time blood sugar values (the APIs are all one hour behind), and dealing with the FDA. Love the idea of more tech-enabled diabetes management, good luck!
Love this! Thank you for sharing!
My backend is also in Go so this is a godsend. Will see how I can incorporate and let you know if I do!
> not having sanctioned access to real-time blood sugar values (the APIs are all one hour behind)
Ah, I didn't know this. One of the prospective tools I had in mind was real-time alerting on drastic drops, e.g. pinging a doctor or relative. I think it will have to be limited to the apps/tools that do support real-time data.
Technically there is unsanctioned access (someone reverse engineered the real-time APIs [1] which I ported to Go). I think the FDA does not want easy access to real-time values so that folks can't easily recommend insulin dosing without oversight. I am personally of the opinion that it is our right to have programmatic access to the real-time data and do with it what we please.
Would love to get in touch to hear more about your long-term vision for the project!
Insulin is lethal at higher dosages, so there is definitely an argument. My counter would be that someone who has to self-administer this drug 5+ times a day should have the right to make determinations about dosing.
Yes, I came across xDrip+ when looking for an Android app I could use with the Libre 2. I don't think Dexcoms are sold in Kenya, and even the Libres around are UK ones, so you need 1) a VPN to set it up, and 2) an iPhone - both being a challenge for most people; I had to buy my first-ever iPhone for this. Anyway, I found xDrip+ a bit of a challenge to set up and a bit too technical to suggest to others; it needs sideloading and manually disabling a lot of Android defaults.
I had a lot of success with Juggluco[1], which is available on the Play Store and provides easy-to-use APIs for interacting with supported CGM readings. Juggluco has an inbuilt xDrip web server, but I haven't tried it yet.
Thanks!
I started out with a Next.js full stack on Vercel, with the DB on Turso, but ended up with a React frontend (Next.js on Vercel) and a Go backend (self-hosted on a VPS).
I decided to port the backend to Go + Postgres (on a Hetzner VPS) and retain the frontend on Next.js - a lighter-weight client, moving most of the compute to the backend API.
A few reasons for the port: I've had a lot more success/stability with Go backends, Turso pulled multi-tenant DBs (which is what I mostly wanted them for), and Next.js was getting too hard for me.
The Go backend is just the std lib (the 1.22+ server with the nice routing) - I mostly write all the lines in this myself.
The frontend is textbook modern React: React 19, Next 15, Tailwind 4.
- AI mostly writes the code in the frontend (Cursor + Cline + sequentialthinking + context7 + my own custom "memory bank" process of breaking down tasks). AI is really, really good at this. I wrote this https://image-assets.etelej.com/ in literally 2 days, 2 weekends ago, with less than 10% of the code being mine (mostly infra + Hono APIs).
> the only info we do store is your session's cookies (and only if you're logged in).
If this truly is the only cookie you store, then you may not need the cookie banner; you can explain the cookie usage in your privacy policy.
Under the GDPR[1], strictly necessary cookies, like the login cookie you describe, do not require consent to be obtained, as long as their usage is explained, e.g. in your privacy policy.
This brings back so many fond memories. I grew up in a rural part of Kenya where the internet was scarce and tech practically non-existent. I was interested in web dev and taught myself PHP using HTTrack to download the PHP manual site, then the cprogramming.com website. I remember writing these site contents into a thick notebook to read in school. Cprogramming.com, imho, was my programming foundation, as I treated it as programming gospel. That kid back then would be shocked at how far I've come, now a dev at MS. Not sure how I came across HTTrack back then, but I am so glad I did.
Logging in shows a dashboard asking you to implement it on your site.
That's not what I expect from a "demo" of a captcha.
A recommendation: please use mCaptcha on the mCaptcha account sign-up page, at the least. That would provide an instant demo of the user-facing UX, which is one of the major pain points of captchas.
Or at least provide a link to the widget[1] as a UX demo, where inspecting network calls also shows the API calls in action.
A tip for going over GitHub code is to use the github.dev domain. It lets you browse the code in an IDE (if you'd like that), with language features and easier navigation (e.g. Go To / Peek Definition, or Find References). It also saves you from cloning a large repo to do the same locally.
In case you're trying to open the RFC links on the site: the IETF site no longer supports plain http and does not redirect to https, hence the 404 errors. You can manually open the https versions of the URLs to visit them.