OP here, as you said and to confirm, I am not involved with the development of Berty in any way.
My understanding is also that Berty is not fully production-ready yet; however, I have been following the project for a while and they seem to be going in the right direction. Other HNers might also be interested, and who knows, the project might grow faster with more people involved.
Thanks for explaining, I thought "Show HN" was for any app/project to be demonstrated, not necessarily affiliated with the poster. I just (re)read the rules and I am sorry for this mistake, maybe @dang can amend the title?
Citadel and Point72 stepped in and closed the hedge fund's position ($3 billion) on GME [1]. The regulators are not moving/commenting just yet; I am curious to see the repercussions of this, and whether Reddit crashing the markets is the new norm.
> [Melvin Capital] was also closed by its owner, Citadel
No. It closed out its GME short. It then got a bailout from Point72 (f/k/a SAC) and Citadel. To my knowledge, the latter and Melvin are not affiliated.
Source on that? Even at these prices, GME is ~$20b market cap on an exchange that represents ~$25t in market cap. I don't know that it really poses a systemic risk to anyone who didn't take out insanely risky short positions on the company in the first place.
It's possible that they simply moved the positions to P72's or Citadel's books so they could claim they no longer hold them. The intent would be to encourage retail investors to declare victory and sell. IANAL so I could be off, but if it were legal, it is certainly plausible.
> Social Capital’s Chamath Palihapitiya jumped into the controversial name, saying in a Tuesday tweet that he bought GameStop call options betting the stock will go higher. His tweet seemed to intensify the rally in the previous session. The stock ended the day 92% higher at $147.98.
This is growing into a bigger problem. The thought of Twitter influencers (for example, E. Musk / Etsy yesterday [1]) heavily moving stock prices with a simple tweet is not really reassuring. The SEC should start investigating it properly, from personal benefit to larger company-wide fraud; the regulators need to act before this becomes the new norm.
It is good that the UK is trying to tackle the vitamin D deficiency problem that is affecting its population. However, it should probably not be done under the "covid prevention" banner; the paper suggesting that vitamin D might protect against it recently "earned an expression of concern":
This is not really a surprise; public health varies greatly even within one country or city. In the UK, there is a similar phenomenon between certain areas of Glasgow and the rest of the country/continent, called the "Glasgow effect" [1]. Although the gap is not as wide, "only" 7 years, it is attributed to poverty, population imbalance, low-quality housing, and pollution.
"Amazon has a history of offering services that take advantage of the R&D efforts of others: for example, Amazon Elasticsearch Service, Amazon Managed Streaming for Apache Kafka,[...]"
And at the end of the article they promote how they themselves rely on other people's hard work:
"TimescaleDB uses a dramatically different design principle: build on PostgreSQL. As noted previously, this allows TimescaleDB to inherit over 25 years of dedicated engineering effort that the entire PostgreSQL community has done to build a rock-solid database that supports millions of applications worldwide."
In the end, it sounds like they are doing exactly what Amazon is doing with open-source.
There's a big difference -- that's how the PostgreSQL community works. 2ndQuadrant (now part of EDB), EDB, and Citus (now Microsoft) all add value to open-source Postgres and contribute back to the community by bringing new features, new life, and new use cases, and of course by committing changes upstream where possible. Timescale is actually on the more open side of that balance, with its licensing and the community-version feature matrix.
Also, in this case, Timescale actually has a pretty forgiving license[0] as long as you are not an add-nothing-aaS provider, perhaps more forgiving than it should be, which I've asked about before[1]. Even before that change was made, running just the community edition as an add-nothing-aaS provider would have been an improvement on the status quo, given how soundly it thrashed some other solutions in the past (ex. Influx[2]) and what you can do with it (promscale[3]).
I know it can't be all roses, nothing is, but I don't think they've put too many feet wrong so far.
[EDIT] - I should note that on the scale of "contributing" to Postgres, the balance heavily tips in favor of 2ndQuadrant, EDB, and Citus, as they obviously have the most committers and core team members. All those companies are to be commended, of course; they're making Postgres work as businesses and keeping it free while also improving it.
This GitHub repository is just the database, what about the code that holds together their cloud resources? I was not able to find it.
They criticize AWS for making money on Elasticsearch, for example; AWS is "taking advantage of the R&D efforts of others". So Amazon is making money on a "serverless" / cloud experience. At the same time, it is known that Amazon contributes back to Elasticsearch [1]. In that regard, I find their business model really similar to the one AWS relies upon.
There's no way that Elastic would accept the Open Distro features as a contribution to Elasticsearch because they include inferior versions of features already available in the commercially-licensed version of Elasticsearch and, in the case of the Search Guard code in Open Distro, code that Elastic alleges was lifted from existing commercially-licensed Elasticsearch features. AWS knows all this but offers it anyway as a PR stunt so they can say that they at least attempted to make contributions to Elasticsearch.
Huh? If I want to run someone else's software on my cloud, how I want to do that depends on my cloud. I want docs and binaries, not a full cloud configuration.
Some people would want k8s, some docker swarm or whatever, some an aws config, others ansible, etc etc etc.
If I want a managed service, I _want_ to pay for that. The price includes people responding to pages and fixing problems, as well as fiddling with configs.
And I'd much rather be paying that money to a small (relatively) open-source company than a behemoth like AWS.
> If I want a managed service, I _want_ to pay for that. The price includes people responding to pages and fixing problems, as well as fiddling with configs.
Is this not what S3 is? You are free to use Elasticsearch and handle everything yourself, but if you want a managed service, you can use AWS. They are "attacking" the S3 offering, I still struggle to see any difference with them hosting Postgres.
> And I'd much rather be paying that money to a small (relatively) open-source company than a behemoth like AWS.
I am 100% with you on this; I do not want to defend AWS, nor do I want to promote their products. The vendor lock-in situation you are in when using AWS is pretty bad and quite scary... And yes, I agree that the way they monetize open-source software is questionable.
I think we (like many) stand on the shoulders of giants when it comes to software, but I’m not sure the comparison is quite apt.
Amazon primarily runs and monetizes closed-source, SaaS-only managed services.
TimescaleDB instead is implemented in the open as an extension to PostgreSQL, and enriches and benefits the broader PostgreSQL community by unlocking a new use case (time series). The Postgres extension framework exists very much for this purpose, for projects like TimescaleDB to contribute back without needing to “pollute” mainline with domain-specific features. Most of TimescaleDB’s code (and all development for the first few years) is Apache 2, and all features are free for anybody to self-manage.
Timescale is a freely available PostgreSQL extension built in the open, and anyone can contribute. That's one of the great things about PostgreSQL - it was built to be extensible and allow customization. YAY!
Again, you're free to use it for any project, all Community features, wherever you want. The only thing you can't do is run a DBaaS for TimescaleDB. Seems like a fair tradeoff, right?
The nice thing about this process is that the reduction of the metal oxide is done with hydrogen, so the cycle does not involve any emission of CO2. Effectively what they do is:
Electricity from renewables -> hydrogen (electrolysis) -> iron oxide reduced to iron -> iron burned for heat -> iron oxide again, in a loop.
I would say this is a nice solution to the problem of storing hydrogen.
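To make the loop concrete, the two halves are standard redox reactions. (This is my reading of the scheme; the exact oxide involved, Fe2O3 vs. Fe3O4, depends on the actual process.)

```latex
% Charging: renewable electricity makes H2, which reduces the oxide to iron
\mathrm{Fe_2O_3} + 3\,\mathrm{H_2} \longrightarrow 2\,\mathrm{Fe} + 3\,\mathrm{H_2O}

% Discharging: burn the iron powder, releasing the stored heat as the oxide re-forms
4\,\mathrm{Fe} + 3\,\mathrm{O_2} \longrightarrow 2\,\mathrm{Fe_2O_3}
```

The iron powder is the actual energy carrier, which is why this sidesteps hydrogen storage: only water and heat leave the cycle.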
Wayland support is actually being actively worked on. It is done via "native" (no X widgets) GTK support. You can find it on:
https://github.com/masm11/emacs
The plan is to get that merged upstream at some point, you can find out more about it on the official mailing lists.
The GTK fork is great, but it’s so slow - especially on HiDPI displays.
I tried it out on my 4K monitor and I felt a noticeable increase in typing latency. It started to feel a lot more like VS Code.
Ultimately I was able to get HiDPI support working in Xwayland in Sway with a series of patches, and I ran Lucid Emacs, which was much faster and made the latency increase go away entirely.
Oh, thank you for the link! I had to give up Emacs when I switched to sway/wayland, and I can't wait to finally have Emacs available again for my muscle memory. Magit changed my life.
I've been doing exactly this for years with Revolut premium (and I'm sure many other "online banks" have the same service). It's like using a password manager, you don't have to worry about how your card details are stored anymore.
It is very neat although I got into strange situations a few times when I needed to prove I was the card owner for a refund or for insurance claims.
GUI file copy tools should be using O_DIRECT, or periodically calling fsync()/fdatasync(). An argument could also be made that the kernel write cache should have a size limit, so that one-off write latency is masked but very slow bulk I/O is not.
O_DIRECT seems like overkill, and the lack of write buffering could be a real detriment in some circumstances. Syncing at the end of each operation (from the user's perspective) should be the best mix of throughput and safety, but it makes it hard to do an accurate progress bar. Before the whole batch operation is finished, it may be useful to periodically use madvise or posix_fadvise to encourage the OS to flush the right data from the page cache—but I don't know if Linux really makes good use of those hints at the moment.
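A minimal sketch of that periodic-sync idea in Python (the function name, chunk size, and sync interval are my own illustrative choices, not taken from any real file manager):

```python
import os

CHUNK = 1 << 20       # 1 MiB read/write unit (illustrative)
SYNC_EVERY = 8        # force dirty pages out roughly every 8 MiB

def copy_with_periodic_sync(src_path, dst_path):
    """Copy src to dst, periodically forcing writes to disk so the
    kernel's write cache never hides many seconds of pending I/O."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        pending = 0
        while True:
            buf = src.read(CHUNK)
            if not buf:
                break
            dst.write(buf)
            pending += 1
            if pending >= SYNC_EVERY:
                dst.flush()
                os.fdatasync(dst.fileno())
                # Hint that the just-written range won't be re-read,
                # so the kernel may evict it from the page cache.
                if hasattr(os, "posix_fadvise"):
                    os.posix_fadvise(dst.fileno(), 0, dst.tell(),
                                     os.POSIX_FADV_DONTNEED)
                pending = 0
        # Final sync so "done" means the data is actually on the device.
        dst.flush()
        os.fdatasync(dst.fileno())
```

A progress bar driven by the position after each fdatasync() would then track real device throughput instead of how fast RAM can absorb dirty pages.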
On really new kernels, it might work well to use io_uring to issue linked chains of read -> write -> fdatasync operations for everything the user wants to copy, and base the GUI's progress bar on the completion of those linked IO units. That will probably ensure the kernel has enough work enqueued to issue optimally large and aligned IOs to the underlying devices. (Also, any file management GUI really needs to be doing async IO to begin with, or at least on a separate thread. So adopting io_uring shouldn't be as big an issue as it would be for many other kinds of applications.)
Not always. If you're reading from a SSD and writing to a slow USB 2.0 flash drive, you could end up enqueuing in one second a volume of writes that will take the USB drive tens of seconds to sync(), leading to a very unresponsive progress bar. You almost have to do a TCP-like ramp up of block sizes until you discover where the bottleneck is.
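That ramp-up could look something like this sketch (again in Python; the starting size, cap, and latency target are illustrative assumptions, not measured values):

```python
import os
import time

def adaptive_copy(src_path, dst_path, target_latency=0.5):
    """Copy src to dst, growing or shrinking the amount written between
    fdatasync() calls so that no single sync stalls much longer than
    target_latency seconds -- a TCP-slow-start-like probe for the
    bottleneck device's real throughput."""
    chunk = 256 * 1024            # start small, like TCP slow start
    max_chunk = 64 * 1024 * 1024  # cap so we never buffer too much
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            dst.write(buf)
            dst.flush()
            t0 = time.monotonic()
            os.fdatasync(dst.fileno())
            elapsed = time.monotonic() - t0
            if elapsed < target_latency / 2 and chunk < max_chunk:
                chunk *= 2        # device kept up: probe a bigger burst
            elif elapsed > target_latency:
                # Device stalled (e.g. slow USB 2.0 stick): back off.
                chunk = max(chunk // 2, 64 * 1024)
```

Against a fast SSD the chunk grows quickly toward the cap; against a slow USB stick it settles at whatever size the device can sync within the latency budget, keeping the progress bar responsive.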
Which distro/desktop? My standard Ubuntu 18.04 with GNOME, with the drive mounted through the file manager, doesn't do this: copying to a slow USB drive is as glacial as it should be, but copying between internal drives is instant and hidden.
Default gnome on Ubuntu 20.04. How much free ram do you normally have? If you don't have enough to buffer the whole operation, then it's not a problem.
Now that I think about it, this might actually explain some bugs I've seen when copying multiple files. Copying one file seems to work, but then when copying a second the progress sits at 0%; it's probably waiting for the first transfer to sync.