Hacker News | nifoc's comments

So it looks like they will be collecting PII (user_ipaddress) and they will also link these events to an account (account_uuid) and you just have to trust their "de-identification pipeline".

Will be interesting to see if they roll this out in the EU (especially with the "Share analytics" box being checked by default).


Yes, and it doesn't seem necessary to collect, to begin with, to meet their intended goals.


The EU's GDPR requires explicit consent only for processing data that is personally identifiable, and even then there are exemptions, such as for operational reasons. For example, if you make an HTTPS request to my server, then of course I have your IP address. It's what I do with that personally identifiable information that determines whether explicit consent is required. If I only use it to ensure operational security and destroy access logs after some limited time, then explicit consent isn't required.

Data collection for aggregate analysis that discards personally identifiable information in a non-recoverable way similarly does not require explicit consent.
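As a toy sketch of that idea (the field and event names here are made up for illustration, not anyone's real schema): keep only aggregate counts and discard the identifying fields, so individuals cannot be recovered from what is retained:

```python
from collections import Counter

# Hypothetical raw events; the identifying fields never leave this module.
events = [
    {"user_ipaddress": "203.0.113.7", "feature": "autofill"},
    {"user_ipaddress": "198.51.100.2", "feature": "autofill"},
    {"user_ipaddress": "203.0.113.7", "feature": "search"},
]

def aggregate(events):
    """Return only per-feature counts; IPs and any other identifiers
    are discarded, so the output is not traceable to individuals."""
    return dict(Counter(e["feature"] for e in events))

print(aggregate(events))  # {'autofill': 2, 'search': 1}
```

Whether a real pipeline meets the "non-recoverable" bar depends on what else is retained alongside the aggregates, which is exactly the trust question here.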

Sounds to me like what they say they're doing is compliant and does not require explicit consent under the GDPR anyway, and therefore whether the checkbox defaults to checked is moot from the point of view of the GDPR.

I understand some people might not want to trust them, their processes or their competence regardless, but that's a matter that's outside the scope of the GDPR. The GDPR is about what they are doing and for what purposes, not whether you trust them.


I used to have a (long) list of posts/comments that they refused to remove after I reported them. Most of these were (at least to me) _very_ obvious cases of being against the TOS (and the law).

I messaged this list to the admins. I emailed it to their support team. I never got a reply from either, not even from support.

I truly believe they just don't care.


Whereas I had 3 accounts permanently suspended for calling someone an idiot on /r/idiotsincars for "harassing speech". I have other accounts, but they took out 1 old account and a squatted account. Like, really? For using the term "idiot" on a subreddit with that very word?

I have accounts here, on Masto, and in a few other places that at least have mostly sane policies. All I know is that reddit is definitely on the decline. And this whole API debacle is going to be their own Digg V4 moment.


There is no sane middle ground on most of reddit. There are subs where you'll get reprimanded far quicker for "annoying mods" by bothering to report anything, and then there are the other subs that are so uptight and intense that your comments can only be fluff; anything else gets slapped down under one of their vague rules. Good luck debating subreddit mods about those vague rules: you'll just annoy them, and admins don't care in the slightest about resolving these petty things.

They've created systems that make it obnoxious for everyone involved.

Tiny subs excluded, but at that point the form of reddit just doesn't suit smaller communities well. The way reddit sorts (best, new, top), plus a bunch of obnoxious automod filters, keeps smaller communities (even if "small" in this sense is 50,000 followers) feeling absolutely dead.


Lol, I had the same thing happen; then, when I asked the mods what happened, I got reported to the admins for harassment. There's no way to discuss it, just a brick wall. Wild. I've been on that site an embarrassingly long time without issue.



I strongly feel this should be opt-in instead of opt-out. They seem to want to do things the right way, but opt-out telemetry in a password manager is something I really do not want.


I agree with preferring opt-in. My initial reaction to any opt-out default is to opt out immediately and worry about why later, if at all.

To further anonymize the data and increase user comfort, cycle the selected users daily so that the same user's telemetry is not collected multiple days in a row. Limiting collection frequency may also make it more palatable to some users: for example, collecting telemetry from a given user no more than one day per month, or one day per two weeks.


We're definitely taking a close look at how folks will decide to participate (or not participate). As much as possible, we de-identify the data we’re gathering through this project. This data will help us prioritize our overall efforts for our customers. We are not looking to analyze data on individual users. - Ben, 1Password


Thanks for the input. I've shared it with our Product team. We’re actively exploring the specifics of how this experience will operate for a range of use cases. We’re fully committed to ongoing transparency, and will provide clear guidance once additional details are available from our research and development period. - Ben, 1Password


Surely they'd receive enough metrics from the folks opting in to beta/pre-releases. Why not tie telemetry to that? Or at least explain why that was considered and dismissed. It seems like an obviously fair solution to me, as a customer who's feeling a bit shafted by this.


Great question! Limiting telemetry to those who opt into beta/pre-releases limits our data to those already most likely to write in with feedback. With this initiative we’re hoping to learn more from other audiences — particularly those not inclined toward beta software. - Ben, 1Password


And since Bing raising prices affects their price so much, I'd argue that Kagi isn't investing enough in their own crawler/index, which could ultimately bring down the price of search.

Instead they're integrating yet another third party (OpenAI), thereby raising the price even more and tying their pricing even more tightly to third-party API costs.


They have updated the FAQ section of the announcement to directly address this (very, very valid) point:

> Q. Didn’t you say in September that current subscribers will be grandfathered in?

> A. Yes we did say that in September. We are sorry that we have to walk back on that promise and we should have done a better job at communicating the pricing change debate that has been going on for over three months with our community.

> A lot of things changed in the meantime that we could not anticipate or predict, namely an increase in search costs and the popularization of generative AI, which further increases costs and makes us lose even more money per user than before. That is not sustainable for a bootstrapped startup, and we had to make the next best decision.

> The decisions made (the price change and the cancellation of grandfathering) are exactly the steps necessary to keep us in the business of search, aligning incentives between us and our users, and keeping the best interests of our users in mind.

> We did the best we could, and we are still going to grandfather everyone in for up to a year on the old plan, and then on a special plan after that indefinitely (which still loses us money, just less). Discussion about this was long and hard, and we made the best possible decision given our abilities and the circumstances.

This is so incredibly disappointing and basically confirms that we're (at least in part) paying for their AI experiments - something that I personally am not at all interested in.


There was a thread here [0] a month ago about the AI stuff at Kagi; my take was that commenters were overall enthusiastic about it. At the time I said I wouldn't pay for it [1] and got a bunch of replies telling me how valuable it was.

I really want Kagi to succeed. I've been following the results of the pricing change closely because it seems like it's not going well, and I don't want them to go out of business. I hope it will convince them to drop the AI stuff, or at least to confirm that there's strong evidence it will pay off somehow, not just hype that makes everybody fawn over LLM stuff.

[0] https://news.ycombinator.com/item?id=34646389

[1] https://news.ycombinator.com/item?id=34647775


Back in September they promised this[1]:

> If such change to Individual plans is to occur, we plan to grandfather-in all early adopters (meaning all current and future paid customers, up until this change) allowing them to keep their existing subscription price as long as they don’t cancel it.

My guess is that they will focus on the "subscription _price_" wording. Technically the price didn't change, since you can still pay them $10. They "just" changed the terms.

[1] https://blog.kagi.com/status-update-first-three-months#futur...


It's unfortunate, since this is not what I (and, I suspect, most people) think of when hearing "grandfathering".

Also, why do we find this out via a blog post? Where's the email saying "oh, by the way, your subscription is changing in a drastic way"?


They sent out an email last night, to me at least


Huh, must be just me. Mine doesn't seem to be anywhere, I have the product email setting turned on, and I get their billing emails. Strange.


Mine was in spam with a pretty high spam score.


I'm a happy Kagi customer, but going from $10/mo to $25/mo seems like a very steep increase. (I realize that much of it is probably driven by Bing more than doubling the price for their API results.)

One thing that feels kind of disingenuous to me is the number of searches that "a normal user" does in a month. The blog post mentions it several times, but they always reference numbers provided by Google or DDG. I have a feeling that the numbers for their "tech-savvy and heavy users of search" are _way_ higher than the averages of Google and DDG.

At my current usage, I would have to go with the $25/mo plan once my current subscription is up.



The new `maybe ... end` looks nice.


At first sight it looks like Elixir's `with` expression.


It is very much that. The Erlang team have not been too proud to steal good ideas from Elixir. Elixir has been a good source of fresh thinking for the BEAM ecosystem which has helped both the Erlang and Elixir side.


The Erlang 'maybe' expression expands on what 'with' allows in Elixir, mostly because the 'with' construct allows a list of conditional patterns and then a general 'do' block, whereas Erlang's 'maybe' allows mixed expressions, either conditional patterns or any normal expression, woven together at the top level.

It is therefore a bit more general than Elixir's 'with', and it would be interesting to see if the improvement could feed back into Elixir as well!
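To illustrate that interleaving (the helper functions here are hypothetical, and this requires OTP 25 with the maybe_expr feature enabled), conditional matches via '?=' and ordinary expressions can sit side by side:

```erlang
maybe
    %% conditional pattern: a non-matching value exits to 'else'
    {ok, User} ?= lookup_user(Id),
    %% ordinary expression: a plain match that crashes on mismatch
    ok = log_access(User),
    {ok, Quota} ?= fetch_quota(User),
    {ok, {User, Quota}}
else
    {error, Reason} -> {error, Reason}
end
```

Note the difference between '?=' (failure routes to the 'else' clauses) and '=' (failure raises as usual), which is what lets normal code mix freely with the short-circuiting matches.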

The initial inspiration for the 'maybe' expression was the monadic approach (Ok(T) | Error(T)) return types seen in Haskell and Rust, and the first EEP was closer to these by trying to mandate the usage of 'ok | {ok, T}' matches with implicit unwrapping.

For pragmatic reasons, we then changed the design to be closer to a general pattern matching, which forced the usage of 'else' clauses for safety reasons (which the EEP describes), and led us closer to Elixir's design, which I felt was inherently more risky in the first drafts (and therefore I now feel the Erlang design is riskier as well, albeit while being more idiomatic).

So while I did get inspiration from Elixir, and particularly its usage of the 'else' clause for safety reasons, it would possibly be reductionist to say that "the good ideas were stolen from Elixir." The good ideas were stolen from Elixir, but also from Rust, Haskell, OCaml, and various custom libraries, which have done a lot of interesting work in value-based error handling that shouldn't be papered over.

I still think these type-based approaches represent a significantly positive inspiration that we could ideally move closer to, if it were possible to magically transform existing code to match the stricter, cleaner, more composable patterns that they offer.

In the end I'm hoping the 'maybe' expression still provides a significantly nicer experience for setting up business-logic conditions in everyday Erlang code, and it is of course impossible to deny that I got some of the form and design caveats from work already done in the Elixir language :)

Also, as a last caveat: I am not a member of the Erlang/OTP team. The design was completed and refined with their participation (they drove the final implementation, whereas I did the proof of concept with Peer Stritzinger and wrote the initial EEP), but the stance expressed in my post here is mine and not that of the folks at Ericsson.


> the 'with' construct allows a list of conditional patterns and then a general 'do' block, whereas the Erlang 'maybe' allows mixed types of expressions that can either be conditional patterns or any normal expression weaved in together at the top level.

This seems slightly incorrect to me. You can write expressions in Elixir's with macro too, by simply swapping the arrow for an equals sign. For example, this is perfectly valid Elixir code:

    with {:ok, x} <- {:ok, "Example"},
         IO.puts(x),
         len = String.length(x) do
      IO.puts(len)
    end
Did you mean something else?


See https://news.ycombinator.com/item?id=31425298 for a response, since this is a duplicate. TL;DR: I had never seen it and had no idea it was possible, because I don't recall seeing any documentation or post ever mentioning it! Ignorance on my part.


The with statement in Elixir already allows for arbitrary expressions between the with and the do. I'm not sure what I'm missing here.


You're right. After all these years (and even writing a book that had Elixir snippets in it) I had never seen a single example showing it was possible and did not know it could do it.

Well there you go, I guess the pattern is equivalent but incidental.


That's good. I didn't come to the BEAM ecosystem from Ruby, so I prefer Erlang's syntax to Elixir's.


You’re not alone. I value the conciseness of Erlang.


I was the same initially, but now having onboarded multiple people on to Elixir projects, the more familiar type of syntax does ease the first week or so of getting your head round things.


I like to think it's just that the bar for making significant changes to Erlang is very high. Stability and backward compatibility over time are worth valuing; for me, that's a property that makes Erlang a great ecosystem.


I think this shows the inverse. DESPITE being a language with a long history and a proven track record of dependability, they are continuing to add novel improvements. This release includes a new core bit of language syntax, and the expansion of the JIT, which now brings a massive performance boost on the two most important platforms.

Turns out you can have stability and innovation!


I don't know what's going on over at 1Password, but some of their decisions/statements are really questionable. A month ago they dropped this[1] in response to the 1PW 8 beta feedback:

> I also wanted to respond to a specific part of @ShakataGaNai's original post about the multiple passwords. We've actually been recommending folks use the same password for each of their 1Password accounts. This might sound ironic given that the typical advice w.r.t. passwords is to use a unique password for everything. The difference is that your 1Password account password is intended to be the one password you remember, and so in theory, if you can only dedicate so much brain space to passwords, if you use only one password for all of your 1Password accounts, you'll be able to make that password stronger than if you have to remember multiple account passwords. So part of the new behavior encourages folks that direction.

More context can be found here[2].

[1] https://1password.community/discussion/comment/609753/#Comme...

[2] https://1password.community/discussion/122614/two-accounts-n...


This comment is pretty misleading. You're making it sound like they advocated for using the same master password for all accounts, while the post you linked (#2) is about changing 1P (since 8.x) to NOT unlock all accounts with one master password/biometric (the 1P <= 7.x behavior).

The OP in that thread is complaining that he has to unlock each account separately with its own password. That response is a suggestion to mitigate password fatigue with multiple accounts and restore the same functionality as 1P7.

And as @gmemstr said, each 1Password account also has a randomized "account key" mixed with your master password, making credential-stuffing attacks impossible. Your account key is given at signup and manually saved by the user. If you want to add a new device, you need to pull the key from an enrolled device or wherever you wrote it down.


To be fair to 1Pass, accounts also have a unique random account ID that is used in combination with email + password. But it does still kind of make you wonder...
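A rough sketch of why that matters (an illustrative simplification, not 1Password's actual scheme): mixing a high-entropy, locally stored secret into key derivation means a password leaked from another site is useless on its own:

```python
import hashlib
import secrets

def derive_key(password: bytes, secret_key: bytes, email: bytes) -> bytes:
    # Illustrative only: fold the per-account random secret into the KDF
    # salt, so the derived key depends on more than the password alone.
    salt = hashlib.sha256(email + secret_key).digest()
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

secret_key = secrets.token_bytes(32)  # generated at signup, saved by the user

k1 = derive_key(b"hunter2", secret_key, b"a@example.com")
k2 = derive_key(b"hunter2", b"wrong-or-missing-secret", b"a@example.com")
assert k1 != k2  # same leaked password, useless without the account secret
```

An attacker stuffing a leaked email + password pair never derives the right key, because they lack the random secret that only exists on enrolled devices.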


That's... insane. So many breaches come from credential-stuffing attacks using leaked data. It doesn't matter how strong your password is if it's been compromised on another site.

