
It would be interesting to learn how your (or any other) team defines the boundaries of a microservice. In other words, how "micro" they are, and in which respect. I guess without these details it will be hard to reason about it.

At my last job we created a whole fleet of microservices instead of a single modular project/repo. Some of them required non-trivial dependencies. Some executed pretty long-running tasks or jobs for which network latency is insignificant and will remain so by design. Some were a few pages long; some consisted of similar-purpose modules with shared parts factored out. But there were few or no flows like "ah, I'll just ask M8, and it will ask M11, and that will check auth and refer to a database." I.e. no calls as trivial as foo(bar(baz())), but spread all over the infra.
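
A minimal sketch of the anti-pattern I mean, with invented service names (m11, m8, reports), URLs, and stub helpers (none of this is our actual code):

    import requests

    def get_report_networked(user_id: str) -> dict:
        # foo(bar(baz())) spread across three services: each call is
        # trivial, but every hop adds latency, a timeout and a failure mode.
        token = requests.get(f"http://m11/auth/{user_id}", timeout=2).json()
        profile = requests.get("http://m8/profile",
                               params={"token": token["value"]},
                               timeout=2).json()
        return requests.post("http://reports/render",
                             json=profile, timeout=2).json()

    # What we preferred: keep trivial composition in one process and reserve
    # network boundaries for long-running or genuinely independent work.
    def check_auth(user_id: str) -> dict:      # stub
        return {"user": user_id, "ok": True}

    def enrich(token: dict) -> dict:           # stub
        return {**token, "plan": "basic"}

    def render(profile: dict) -> dict:         # stub
        return {"report": f"rendered for {profile['user']}"}

    def get_report_local(user_id: str) -> dict:
        # the same foo(bar(baz())), kept in-process
        return render(enrich(check_auth(user_id)))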



Did you have a common, centralized data store, or did each microservice manage its own instance?

(Because this is at once one of the defining elements of this architecture... and the first one to be dropped when you actually start using it "for real".)


Yes and no. The "central" part of the data flowed naturally through services (i.e. it was passed along in requests and webhooks, not in responses). Microservices maintained local state as well, though it was mostly small and disposable thanks to the infra-wide crash-only design. For example, we didn't hesitate to shut something down or hot-fix it, except during bus-factor periods when only a few instances were up. Services could also go down on their own or be taken down by an upstream, and the routers avoided them automatically.
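
Roughly how the routing side behaved, as a hypothetical sketch (the instance URLs, the /health endpoint and all the names here are assumptions, not our actual stack):

    import random
    import urllib.request

    # Invented instance URLs; in reality this was the router/LB's job.
    INSTANCES = ["http://m8-a:8080", "http://m8-b:8080", "http://m8-c:8080"]

    def healthy(base_url: str) -> bool:
        try:
            with urllib.request.urlopen(base_url + "/health", timeout=1) as resp:
                return resp.status == 200
        except OSError:
            # Crashed, shut down, or mid hot-fix: crash-only means we just
            # route around it and let it rebuild its small local state later.
            return False

    def pick_upstream() -> str:
        alive = [url for url in INSTANCES if healthy(url)]
        if not alive:
            raise RuntimeError("no healthy upstreams left")
        return random.choice(alive)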




