When using browser-tools-mcp I can point to an element and ask the Agent to "change the color of this element to red". It copies part of the HTML and searches the codebase for it.
I can imagine an in-browser, framework-level tool that would know exactly which controller and which template generated this element and could target it more precisely while using fewer tool calls and fewer tokens. I haven't tested it yet, but that is my expectation.
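A hypothetical sketch of what that could look like: a framework's dev-mode renderer stamps each element with its origin, and a browser-side tool reads the stamp back instead of grepping. Every name here (SourceRef, renderTag, data-source-ref) is invented for illustration, not any real framework's API.

    // Dev-mode only: embed the controller/template origin on each element.
    interface SourceRef {
      controller: string; // e.g. "UsersController#show"
      template: string;   // e.g. "app/views/users/show.html.erb"
      line: number;
    }

    function renderTag(tag: string, body: string, src: SourceRef): string {
      const ref = `${src.controller}|${src.template}:${src.line}`;
      return `<${tag} data-source-ref="${ref}">${body}</${tag}>`;
    }

    // Browser side: resolve a clicked element straight back to its source.
    function sourceOf(el: Element): SourceRef | null {
      const ref = el.getAttribute("data-source-ref");
      if (!ref) return null;
      const [controller, loc] = ref.split("|");
      const colon = loc.lastIndexOf(":");
      return {
        controller,
        template: loc.slice(0, colon),
        line: Number(loc.slice(colon + 1)),
      };
    }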
1/ How many Coins per commit will somebody get? It's totally subjective and decided ex post. How do you compare a patch with 300 lines of code to a bugfix that touches just one character? Or to a comment that sheds light on an important issue?
2/ How do business owners report profits? Even if they are totally honest people - we live in a complex world where it might be very difficult to separate the profits of one App from another.
All in all, it seems to me that this is a great way to exploit young developers who want to learn but don't know their value yet. The Coins system tries very hard to give the impression that it is fair. It's not.
The fact that I don't own my contributions is a total show stopper for me.
I like the overall idea - just not this particular iteration of it.
I've used similar, standalone services and also built my own, but I'm excited to try Mixpanel's offering because it's already integrated into the tracking.
I am a programmer, yet I still don't get what this is. If an app wants to send JSON to another app, why not use HTTP for that? DNS will take care of the routing. The only thing that seems novel to me is their use of UDP, which is faster than HTTP.
Is it to bypass DNS? With all the moves to control the internet these days, an encrypted, peer-to-peer, decentralised system that can deal with any kind of data would allow for DNS-less email / websites etc etc...
Though the main point of DNS is that it manages a namespace, and you wouldn't want clashes, so websites would probably have to have URLs like thttp://{some 64char hash}/
User A wants to send and receive messages with user B.
Both are identified only by their IP and Port pair.
User B does not know of user A's intentions.
So user A sends switch C a UDP packet asking for "hole-punching".
Switch C runs a service where users behind NAT routers connect and sign up to be contacted when someone wants to "hole-punch" to them.
Switch C sends a packet to B with info about A.
B sends A a packet, and B's router is now expecting packets from A. As soon as A sends its first packet to B, A's own router is open for packets from B. Now both A and B can send packets to each other without C.
DNS is not yet entirely out of the loop.
The main switch for telehash is telehash.org:42424
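To make the sequence concrete, here's a rough sketch of A's side in TypeScript using Node's dgram module. The message formats are invented for illustration (the real telehash wire format is different); switch C stands in for telehash.org:42424.

    import * as dgram from "node:dgram";

    const sock = dgram.createSocket("udp4");
    const SWITCH = { host: "telehash.org", port: 42424 }; // switch C

    sock.on("message", (msg, rinfo) => {
      const data = JSON.parse(msg.toString());
      if (data.type === "punch-info") {
        // C told us B's public IP:port. Sending anything to B opens our
        // NAT mapping for B's packets; B does the same toward us.
        const hello = Buffer.from(JSON.stringify({ type: "hello", from: "A" }));
        sock.send(hello, data.peerPort, data.peerHost);
      } else if (data.type === "hello") {
        // A direct packet from B: the hole is punched, C is out of the loop.
        console.log(`direct contact with ${rinfo.address}:${rinfo.port}`);
      }
    });

    // Ask C to coordinate a punch toward B (identified by some handle).
    const req = Buffer.from(JSON.stringify({ type: "punch-request", peer: "B" }));
    sock.send(req, SWITCH.port, SWITCH.host);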
[...]End: a SHA-1 hash key (40 hex characters, 160-bits) stored in the global DHT and distributed between switches. Switches will distribute and look up ends in the global DHT that are important to them. TeleHash defines the SHA-1 hash of the external IP:PORT of a switch as part of the protocol. Applications built on top of TeleHash can add their own ends to the DHT like hashes of files, hashes of e-mail addresses or hashes of other application-specific values (e.g. user@chat).[...]
So if I understand this right, every packet has a protocol-defined +end and the application can add its own end on top (e.g. an email address?).
OK, I was being a bit thick there: to look up an email address you presumably just hash it (with some kind of canonicalization) and use that as the key for your lookup for a matching +end.
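In code, deriving such an end is just a SHA-1 over some canonical value, giving the 40-hex-char (160-bit) DHT key the quoted spec describes. The canonicalization rule (trim/lowercase) below is my guess, not something the spec pins down.

    import { createHash } from "node:crypto";

    // An "end" is the SHA-1 hex digest of a canonicalized value.
    function end(value: string): string {
      const canonical = value.trim().toLowerCase(); // assumed canonical form
      return createHash("sha1").update(canonical).digest("hex"); // 40 hex chars
    }

    // Protocol-defined end of a switch: hash of its external IP:PORT.
    console.log(end("1.2.3.4:42424"));
    // Application-defined end, e.g. an email address to look up:
    console.log(end("alice@example.com"));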
I'm not sure, but this is what I think I get out of this. You really don't need an email; this could be a hash or whatever. I think if more apps or services used some protocol like this, there could be a unified way to "look up" an account or something.
For example, if you have a decentralized social network where every node runs the same software, say "facespace", every one of those nodes would look up the same identifier. I guess...
I totally disagree with updating the UI BEFORE the request gets back. It's wrong for so many reasons, and they all boil down to the fact that server state is independent of client state.
The speed argument also doesn't hold. If requests take too long to process then you either have a problem with your API (doing something synchronously on the server side that should be done asynchronously, granularity problems, ...) or your server is freaking slow. At worst a request should take under 100ms of pure server time. Add latency and you have 300ms.
A sync problem on the server can't be worked around on the client side. You would end up introducing complexity in an unstable, unaffordable and insecure client.
Also, actions like filling a page with data from a DB do require the client to wait for the server to complete.
It's always the eternal "Good for the user" vs. "Good for the programmer". E.g. when creating a new language, one must make a choice: should it be easier to read and use for the coder (its user), or easier to build from the developer's side?
And, if we look carefully at the past, it seems it always starts with "Easy for the coder first" -------> "Easy for the user". For example, when the first examples of Ajax came out, it was really hacky and most programmers would never have believed what they'd see today.
So, I think that you are half right with the "introducing complexity in an unstable, unaffordable and insecure client." Maybe with today's technology and frameworks, you are right. But I'm certain that in the coming months/years, we'll head down the road toward a better UI.
And I still believe that it's not as hard as people think to make the UI update first and reconcile later. 99.9% of the time, the server returns "ok" or something we already knew. In the remaining 0.1%, we have to choose whether we really want to make it to 100%... but in these rare cases, a hard refresh is perfectly fine.
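For what it's worth, the optimistic path isn't much code. A minimal sketch, assuming a placeholder endpoint (/api/items/:id/star) and a placeholder renderStar UI hook: apply the change locally, fire the request, and roll back (or hard-refresh) on the rare failure.

    // Placeholder UI hook: in a real app this would update the DOM.
    function renderStar(itemId: string, starred: boolean): void {
      console.log(`item ${itemId} starred=${starred}`);
    }

    async function toggleStar(itemId: string, starred: boolean): Promise<void> {
      renderStar(itemId, starred); // optimistic: UI updates before the round trip
      try {
        const res = await fetch(`/api/items/${itemId}/star`, {
          method: "PUT",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ starred }),
        });
        if (!res.ok) throw new Error(`server said ${res.status}`);
        // The 99.9% case: the server confirmed what we already showed.
      } catch {
        // The rare case: undo locally (or do a hard refresh via
        // location.reload()) so client state is rebuilt from server state.
        renderStar(itemId, !starred);
      }
    }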
I think that sync'ing client and server state is a concept that most people do understand. For instance, the Dropbox UI clearly shows sync'ing between client and server. Mail apps show spinners to indicate messages being sent. Asynchronous UIs and their subtle cues have been around for quite a while.