There are hundreds of strains of Lactobacillus delbrueckii subsp. bulgaricus and Streptococcus thermophilus bacteria that are used to make "sour milk" (i.e., yogurt). I don't really care who invented what and when; I just enjoy the great variety of natural products, and I really hate the surrogates produced by big brands like Danone and Nestlé. At the end of the day it is a matter of taste, but as a Bulgarian I prefer natural Bulgarian "кисело мляко" ("sour milk"), which can be found only in selected shops or in small villages around the country, because it is more sour, naturally thick, and has over 5% natural fat. There are similar products all over the Balkans and also in Iran, but they have a different taste and texture, I guess depending on the strains used for fermentation and many other factors. I've traveled to more than 50 countries on 4 continents in the last 30 years, and I can say one thing: just don't consume the "branded" surrogates that pretend to be "something" if you haven't tasted the real thing :-) I guess the same is true for every food. Just last week I bought five different brands of Dijon mustard in Dubai... OMG, none of them was even remotely close to what Dijon mustard actually is...
* Automate things in your daily life: I spend some time at the office downloading many kinds of log files and analyzing them. So I built a small Qt GUI that behaves like an MC (Midnight Commander) clone on Windows. It's not very nice looking, but it improves day by day.
* Turn hobbies into useful apps: I visit Hacker News frequently, so I made a GUI client for HN.
No matter what you do, if the requirements are not clear, you won't be able to finish. And if no one uses it, it will be useless and disappear into the crowd.
I've seen management push for outsourcing simply on the basis of having worked with a particular company/group before, even when it made no sense. It's about knowing the trade-offs and making the right decision for the company as a whole, not acting on personal preference.
And many companies used to make outsourcing decisions based on simple head-count cost, without factoring in the extra time, hours, and delays due to integration, poor work, and other factors.
AI won't make coders unemployed. It will simply create a new type of job, like TensorFlow programmers. It's the same as with C++ programmers, who in a sense generate assembly code by using a tool called a compiler.
The first iteration, maybe not. But the end goal for a general AI is to have it close the loop, i.e., to have an AI capable of programming itself. At that point you enter the realm of recursive self-improvement, with no humans necessary in the loop.
What do you mean by small scale? Requests per second? Our Go-backed API has handled 90k rps without batting an eye. Or do you mean small scale as in team size? My understanding is that Go was specifically designed to work for large organizations (cough, Google, cough).
Or do you mean single-page JavaScript web sites, in which case Go would be the backing API? I don't see what qualifies Go as "small scale" vs. "large scale."
Not akditer, but I think the key words are "self sufficient," i.e., the Go standard library provides a great deal of core functionality out of the box.
A larger project is more likely to require features provided by third-party packages.
Small scale means you won't use Redis for in-memory data storage, and you won't need C++ for heavy string processing and hash-table lookups. Go also doesn't scale well when heavy processing of trees is involved (http://benchmarksgame.alioth.debian.org/u64q/binarytrees.htm...).
FWIW, that benchmark is not reliable. Developers have submitted Go programs that do far better, but they get rejected because they use custom memory allocators. This is despite the fact that the C++ entry does precisely this with an arena allocator.
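For context, here is a rough Go sketch of the kind of tree-heavy workload the binary-trees benchmark measures, next to the slice-backed arena trick that rejected submissions use to sidestep per-node GC pressure. The node layout, `check`, and the `arena` type here are illustrative assumptions modeled on the benchmark's idea, not any official entry:

```go
package main

import "fmt"

// node is a binary-tree node, as in the binary-trees benchmark.
type node struct {
	left, right *node
}

// check counts the nodes in a tree (the benchmark's work function).
func check(n *node) int {
	if n.left == nil {
		return 1
	}
	return 1 + check(n.left) + check(n.right)
}

// buildNaive allocates every node individually, so each one is
// tracked by the garbage collector.
func buildNaive(depth int) *node {
	if depth == 0 {
		return &node{}
	}
	return &node{left: buildNaive(depth - 1), right: buildNaive(depth - 1)}
}

// arena hands out nodes from one pre-allocated slice; the whole tree
// becomes unreachable in a single step when the arena is dropped.
type arena struct {
	nodes []node
	next  int
}

func newArena(depth int) *arena {
	// A complete tree of the given depth has 2^(depth+1)-1 nodes.
	return &arena{nodes: make([]node, 1<<(depth+1))}
}

func (a *arena) alloc() *node {
	n := &a.nodes[a.next]
	a.next++
	return n
}

func buildArena(a *arena, depth int) *node {
	n := a.alloc()
	if depth > 0 {
		n.left = buildArena(a, depth-1)
		n.right = buildArena(a, depth-1)
	}
	return n
}

func main() {
	const depth = 10
	fmt.Println(check(buildNaive(depth)))                  // prints 2047
	fmt.Println(check(buildArena(newArena(depth), depth))) // prints 2047
}
```

Both builds produce the same tree shape; the difference is purely in how many individual allocations the GC has to track, which is exactly what the benchmark's acceptance rules fight over.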
Go was not designed for the small scale, but the large:
- It has a sensible module system that makes compilation fast.
- It's a simple language that encourages boring code; your coworkers will probably write code that works and that you can maintain.
- Multithreading is a first-class concept. Programs you build locally using typical patterns scale when you run them on massive multi-core servers.
- The language is memory-safe by default.
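The multithreading point can be sketched in a few lines: the same goroutine code parallelizes across however many cores `runtime.NumCPU` reports, whether that's a laptop or a big server. The `sumSquares` function and its work-splitting scheme below are illustrative assumptions, not a standard-library API:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// sumSquares splits the range [0, n) across one goroutine per CPU.
// Each worker takes a strided share of the indices and writes into
// its own slot of partial, so no locking is needed on the hot path.
func sumSquares(n int) int {
	workers := runtime.NumCPU()
	partial := make([]int, workers)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for i := w; i < n; i += workers {
				partial[w] += i * i
			}
		}(w)
	}
	wg.Wait()
	total := 0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	fmt.Println(sumSquares(1000)) // prints 332833500 (sum of i*i for i in [0, 1000))
}
```

Nothing in this program mentions a thread count or machine size; the runtime maps the goroutines onto whatever cores exist, which is the "build locally, scale on the server" property described above.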
C++ projects require experts to build, scale, and maintain. Go is designed to give that capability to journeyman developers.
Yes, C++ is generally going to be faster, but people rarely talk about why that is. It usually comes down to a smarter compiler, unsafe operations, or clever optimization. The first is legitimate but rarely that significant; the second is a penalty usually worth keeping (you want bounds checking on arrays); and the third misses the point.
Sure, an expert C++ developer could write faster code, but is that who you have? Are you going to take the time to do all that optimization work?
> Developers have submitted go programs that do far better, but they get rejected because they use custom memory allocators. This is despite the fact that C++ does precisely this with an arena allocator.
No, apr_pools.h was not custom-written to make some programming language look better on a toy benchmark!