simonkafan's comments

Ah, I see it now: "prefix=true" is the property that apparently includes "networking" when searching for "network".


Not for "home network": https://ibb.co/59yFDT5


It would already help if the rules of the app stores were decided by a parliament, and if kicking a provider out of a store required a court ruling.


That would make legitimate bans very slow and ineffective, though.


It would be the downfall of the Google empire (and probably half of the Internet) if the majority of companies realized that most of their online/mobile ad budget is absolutely wasted. I don't know of any other industry where people put in so much money only to have so little tangible evidence, at the end of the day, of what they really paid for ("look, X real people clicked on your ad... at least that's what we tell you, whatever that means for you.").


My basic assumption is that most people at the big IT corps just want to either a) make money or b) work on interesting things. I don't think the majority there enjoy torturing colleagues or subordinates (although it certainly can happen). That being said, the story sounds to me entirely or at least partially made up. In this story, everyone at Apple seems to have only one goal: psychologically abusing OP. Especially the passage "the note section of a hidden slide on a deck that she had uploaded and it was an indirect suicide/murder threat" sounds absolutely implausible: why would someone put a death threat in a slide deck, which clearly documents said threat?

My serious (and absolutely not mean-spirited) advice to OP: see a psychologist and talk about it, especially about all the "hidden signs" you supposedly received from colleagues.


You must not be social enough to have met these people, but I can certainly think of people I've known (though don't currently work with) who would do these kinds of things. Maybe they didn't start out mean, but when the author started commenting on shoddy work and on how the team was manipulating data, they probably saw her as trying to ruin a good thing they had going (which would threaten all their prospects of career progression). From their perspective, they might've just seen her criticisms as her maliciously trying to get ahead, or as an attempt to show off her technical superiority by pointing out their mistakes (which may put the manager always correcting typos into perspective as well).


I still haven't seen or read about a useful use case for GPT-3. Maybe procedurally generated chitchat between NPCs in video games, or an advanced Lorem ipsum generator, but that's about it.


Yes. People were literally claiming to be able to write code from English-language descriptions. This guy on Twitter generated a ton of hype with these cherry-picked examples... even from people who should know better, like Eliezer Yudkowsky.

https://twitter.com/sharifshameem/status/1284095222939451393...


It's also interesting to note that the hype came a while after the release of GPT-3 (the tweet was about a month after), and that many of the examples could plausibly have been generated by GPT-2 (which didn't get as much hype). Put another way, you could show people GPT-2 examples from a year ago, claim they were from GPT-3, and I don't think many would know the difference.

I do think GPT-3 has shown improvement, and it's a step forward, but it probably tells us more about how humans interpret than about how AI might work. I wrote about it more in the link below:

https://avoidboringpeople.substack.com/p/doctor-gpt-3


GPT-3 says: "Well, it's not intended for that. It's more for meaningful conversations like you have with people in real life. Look, use cases are not my area of expertise, you'll have to talk to the project lead about that."

GPT-3's use case is fun.


Auto-formatting punctuation - or syntax. You can show it a few examples and it will insert punctuation into raw words coming in from speech recognition, or convert badly formatted Python into well-formatted Python. It can correct syntax errors in C++ (!).

More generally, it can convert text from a simple, unstructured form into a more complex, more structured form, given just a few example transforms. It's astonishingly good at it, but I haven't seen much about this on the public net. I don't know why people aren't exploring it more, tbh.
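
Concretely, here's a minimal sketch of the punctuation case against the old (pre-1.0) openai Python client; the engine name and the example pairs in the prompt are my own guesses, not any official recipe:

    # Few-shot punctuation restoration sketch. Assumes the pre-1.0
    # openai client; "davinci" and the example pairs are illustrative.
    import openai

    openai.api_key = "YOUR_KEY"  # placeholder

    PROMPT = """Add punctuation and capitalization.

    Input: what time is it
    Output: What time is it?

    Input: i went home then i slept
    Output: I went home, then I slept.

    Input: {raw}
    Output:"""

    def punctuate(raw):
        resp = openai.Completion.create(
            engine="davinci",
            prompt=PROMPT.format(raw=raw),
            max_tokens=64,
            temperature=0,   # deterministic formatting, no creativity
            stop="\n",       # cut off after the single rewritten line
        )
        return resp.choices[0].text.strip()

    print(punctuate("where do you want to eat tonight"))

The same prompt shape (a few input/output pairs, then the new input) covers the Python-reformatting and C++ cases too; only the examples change.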


Now that it's a captive technology, the question becomes: why should anyone other than Microsoft want to explore it?


It's not captive. It's quite replicable (and impossible to monopolise), it's just expensive.


I’m the founder of copy.ai, a GPT-3 app that generates short form marketing copy. We have paying users already who find it helpful.


A scathing critique of both GPT-3 and marketing copy


This is unjustly harsh. At least it's got tangible benefits and customers without harming a soul. GPT-3 can be and likely already is weaponised, so it's nice to see anyone out there using it for humble things rather than straight-up digital warfare.

I'm hoping we can finish some of Tolkien's unpublished books (Please give me beta access OpenAI heh)


> [...] GPT-3 can be and likely already is weaponised, so it's nice to see anyone out there using it for humble things rather than straight-up digital warfare.

Well, not that I necessarily agree with the decision (I'm undecided), but one point in favor of only giving access to the model via an API, rather than releasing it, is that usage can be monitored and "weaponization" hopefully detected[0].

It would be pretty interesting if OpenAI eventually provided an external "ML API abuse monitoring" service, but one problem I haven't figured out a solution to is that when providing such a service, the goals of "reducing harmful use" and of "slowing the arms race"[1] (which are both valid goals) are somewhat opposed. I'm still thinking about that.[2]

[0] Using ML to detect harmful uses of an ML API would be quite an interesting research topic, and quite in line with OpenAI's stated mission, but gathering the necessary data set for training purposes may require allowing harmful use of the API in the first place (though I have thought of a few ways to mitigate those risks and limit the actual harm). A toy sketch of such gating follows these notes.

[1] http://xkcd.com/810/ is amusing, but misses the point. Spammers aren't just attempting to evade filters, but also trying to accomplish their goal, so inadvertently training a bot capable of writing an apparently constructive comment that successfully sneaks a spam link through to be clicked on and/or indexed isn't exactly a win for the good guys.

[2] I wouldn't be surprised in the least if a GAN-like social deception arms-race (especially since the same networks have to serve as both generator and discriminator) was the proximate cause of the Upper Paleolithic Revolution.
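
Since the gating idea in [0] is easier to poke at in code, here's a purely speculative toy of it; every name here is made up, and the keyword screen stands in for a real learned classifier:

    # Hypothetical sketch: gate an ML text API behind a cheap abuse
    # screen and log flagged traffic for human review.
    SUSPECT_TERMS = {"phishing", "botnet", "disinformation"}

    def abuse_score(prompt):
        # Stand-in for a trained classifier: fraction of terms hit.
        hits = sum(term in prompt.lower() for term in SUSPECT_TERMS)
        return hits / len(SUSPECT_TERMS)

    def log_for_review(prompt):
        # Stand-in for a real audit pipeline.
        print("FLAGGED for human review:", prompt[:80])

    def guarded_generate(prompt, generate, threshold=1 / 3):
        # Refuse and log when the screen trips; otherwise pass through.
        if abuse_score(prompt) >= threshold:
            log_for_review(prompt)
            raise PermissionError("prompt flagged by abuse screen")
        return generate(prompt)

The flagged prompts are exactly the data set problem mentioned above: you only collect them by letting them arrive in the first place.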


Oof, such an awesome technology, and its use goes toward something like marketing copy. I'm sure it pays the bills for you, but it's kind of gross overall.


It's the AT-5000 auto-dialer all over again.


In the future, AI will persuade people who to vote for in coming elections.


Yes, if only they put their efforts towards generating low effort, self-righteous posts on message boards.

Marketing isn't evil, and it isn't the end of the world.


Of course marketing isn't evil, and I didn't say that. Marketing can be quite awesome and impressively done. However, you know that AI/GPT-3-based writing of marketing copy will lead to lowest-common-denominator copy scaled out to everyone, with tweaks for each specifically targeted person. My opinion is that that's as gross as ML people at Google optimizing every ad click with slight tweaks to maximize revenue at any cost. Gross.

As to your dig, my conscience is clear; I made a conscious effort to avoid such ends in my professional life, sometimes to my short-term detriment. My personal time is mine to do with as I please, even if you consider some of that time to be used in a self-righteous or low-effort way. It's not self-righteous to point out bad things.


Not the same story but related: Microsoft had to rebrand its online storage service SkyDrive to OneDrive after the television broadcaster "Sky" won a trademark lawsuit in the UK.

Can anyone tell me how this is possible? Why am I not allowed to use the word "sky" in a product name completely unrelated to TV?


Sky is also an internet service provider, which isn't as far removed as TV really, and SkyDrive is hardly unrelated to internet service.


I'd say it doesn't seem to be a problem for Box and Dropbox, but it probably is; the two companies are just too similar.


Average employee tenure at Google is a little over 3 years, so it doesn't seem to be the ultimate goal in life.


One good thing is that you can officially use it as an API¹. I used it to automatically search for training images for an algorithm.

¹ https://azure.microsoft.com/en-us/services/cognitive-service...
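
For context, a minimal sketch of how that looks against the Bing Image Search v7 REST endpoint (the key and query are placeholders):

    # Sketch: gather candidate training-image URLs from Bing Image
    # Search v7. requests is the only dependency.
    import requests

    ENDPOINT = "https://api.bing.microsoft.com/v7.0/images/search"
    HEADERS = {"Ocp-Apim-Subscription-Key": "YOUR_KEY"}  # placeholder

    def image_urls(query, count=50):
        resp = requests.get(ENDPOINT, headers=HEADERS,
                            params={"q": query, "count": count})
        resp.raise_for_status()
        # Each hit in "value" carries a direct contentUrl to the image.
        return [hit["contentUrl"] for hit in resp.json()["value"]]

    for url in image_urls("stop sign"):
        print(url)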


+1 for their low cost search APIs

Years ago, I used Google's search API (written by Nelson Minar). When they canceled the service, I started using the Bing search APIs and never looked back.


Using chatbots to solve support/sales problems is like solving autonomous driving by putting a humanoid robot behind the wheel: it's unnecessarily complex overhead, just stop that!

If I'm communicating with a human, sure, natural language is the way to go. If I'm communicating with a computer, I want clear, straight facts just one mouse click away. I don't want to text my way through the bot's database, and I don't want to add language overhead when there is no real human on the other side.

Am I the only one?


I'm not sure. Using a chatbot to triage support, or perhaps even to actually provide first- or even second-line support, is probably going to make for a net saving at scale, even if it's slightly leaky.

I would be very careful about where to use it, though. You definitely wouldn't want to deploy such a product in a critical service context without some kind of human marshal.

Something comparatively trivial, though. Perhaps a super intuitive insurance comparison widget that takes natural language as input.

Sure, some joker will go off-piste and try to jerk the bot around, but in those circumstances, customers are there for a reason.

There are, of course, already products that do the insurance comparison thing, but there are a few markets with a similar dynamic that need a little more than good UX to coax requirements and data out of users.

Something like this could probably do it.


Right, most T1 support is using a template or script anyway. If I were building a better chat-support system, I'd create some sort of score that predicted the likelihood that a generated answer is relevant to the question asked. If the score was too low, present the question to a human instead; the redirection to a human could even be transparent to the user. And once the human answered the question, they wouldn't need to update a knowledge base or a template, it could just be learned by the system. This could enable a lot of efficiency, and I don't think it would hurt the level of support a user receives, if only because current human-based support is so often just the human choosing which canned response to use anyway. Replacing T2 or T3 support is not imminent.
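
A rough sketch of that routing logic; the model, knowledge-base, and queue interfaces are all hypothetical stand-ins for whatever the real system would use:

    # Score-gated support routing: answer automatically when confident,
    # otherwise hand off to a human and learn from their reply.
    CONFIDENCE_THRESHOLD = 0.8  # tune against observed relevance

    def handle_ticket(question, model, knowledge_base, human_queue):
        # Hypothetical API returning a draft answer plus a relevance score.
        answer, score = model.generate_with_score(question)
        if score >= CONFIDENCE_THRESHOLD:
            return answer                          # bot replies directly
        human_answer = human_queue.ask(question)   # transparent hand-off
        knowledge_base.add(question, human_answer) # no template to edit
        return human_answer

The key design choice is the threshold: set high, the bot only answers the easy, scripted cases, which is exactly where T1 templates live today.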


Why would a human who isn't related to the business, but is working from a script, perform better than an algorithm?

Serious question. If you had a line to the CEO, or to someone with actual power in the organisation, then sure, I understand why connecting callers there would help. Does any business actually do that?

For this reason I actually prefer (voice) chat bots over actual people: it's clear you're working against a script, and you don't feel bad dealing with them.


<RANT> I despise telephone natural-language support systems; most are inefficient, and because I'm skeptical, I assume they're inefficient in order to further train their voice model and develop it as an incidental asset. </RANT>

I'm interested in any paradigm that can make searching knowledge bases more _fuzzy_.

I'm interested to discover whether a chat bot can _orient me to a collection_ faster than I can guess what's there.
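
As one trivial example of what "fuzzy" could mean here, plain string similarity over article titles already beats exact keyword match; this uses only the standard library, and the titles are made up:

    # One simple flavor of fuzzy knowledge-base search: rank article
    # titles by string similarity to the query.
    from difflib import get_close_matches

    KB_TITLES = [
        "Reset your password",
        "Configure home network",
        "Cancel a subscription",
    ]

    def fuzzy_lookup(query, n=3, cutoff=0.3):
        # Returns the closest titles first; cutoff discards weak matches.
        return get_close_matches(query, KB_TITLES, n=n, cutoff=cutoff)

    print(fuzzy_lookup("network setup"))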


Dealing with chatbots, and natural language interfaces in general, feels like trying to make your grandma with dementia do work: for the things she can actually help with, you don't need her help, and for the things you do need help with, you have to understand the nature of her condition well enough to explain your problem, only to get a template response like a phone number to call.


I think a chatbot as customer service agent is a great idea. I don't want to wait 2 hours for customer support that just reads off a script anyway. Am I the only one? That being said, I don't think we're there yet.


Definitely not the only one. I hate support chatbots so much. They’re rarely useful, but even when they are, I hate using them. No thanks!

