I'm probably way too late for this thought to get any traction / discussion - but I have this weird feeling that OpenAI screwed up and showed its "cool new thing" too early, and publicly.
As much as it pains me to say this, I don't think the real money is in making this a service, or "the product." I think the real money is in using AI internally as a puzzle piece of your backend, i.e. the secret sauce behind the xyz product.
I'm being very narrow here, but you can only do so much by integrating what OpenAI has built into your products - eventually, when "everything" is fed by the same model, "everything" ends up at the same level. In contrast, if you train and create your own models to make xyz do something specific, nobody knows how it was done, or it's surely a lot harder to kang.
I have zero proof, but I suspect Google, for instance, has models that would literally obliterate what OpenAI has shown capability-wise. They're probably not necessarily language models, though. Again, nothing to stand on here, but I doubt their search and analytics, for example, are driven by hard-coded algorithms these days.
Bard may have been released sort of as a "psh, we've been there, done that" when in reality they hadn't, because they never planned to make the models they were/are working on "publicly" available to use. It makes me wonder if this is how Google has led for so long in some areas - and now OpenAI sort of screwed it up for everyone by making it a service that can be integrated / adopted by nearly anyone.
The only people, I guess, who are really going to know are the devs working for these big orgs, and I'm sure that's lock-and-key knowledge.
> I have zero proof, but I suspect Google, for instance, has models that would literally obliterate what OpenAI has shown capability-wise. They're probably not necessarily language models, though. Again, nothing to stand on here, but I doubt their search and analytics, for example, are driven by hard-coded algorithms these days.
Then why is Bard so bad? Bard feels like GPT-2 or LLaMA 7B with no finetuning most of the time (I tried it two or three times over the course of a week and went back to ChatGPT).
From my perspective, Bard went from "literally didn't exist" to "released" over the course of about a month. GP seems correct in that it very much felt like something picked up off the shelf, slightly dusted off, and released. Is it as good as ChatGPT? From my testing, no. Is it the pinnacle of what Google can create, given motivation? I'm pretty sure also no. In comparison to the state of all the research papers Google and DeepMind release, it definitely feels rushed. So I'd suggest not judging Google on its initial fast-follow project: either Google will come out with something compelling in the next 6 months or so, or we can conclude it really was leapfrogged and has fallen behind. But judging it now seems a bit too conveniently pessimistic, IMO.
(There's a legit chance Google will flub this, don't get me wrong. It's just too early to properly conclude one way or the other.)
Google is in a weird spot. I suspect they are capable of doing so much more, but there is a serious risk to cannibalizing their 99% revenue model (search), which they probably don't yet know if they can monetize in the same way.
Unfortunately for them, OpenAI has forced the question down their throat, which I think is exactly what they intended or at least hoped for.
Well, my thought is that they "whipped it up" real quick to attempt to downplay it a bit. Did that backfire? Yeah, I think so. But personally, I think people are missing where the real money is. OpenAI will do great for a while until every damn product and service is using it, and then it's a race to the bottom. But that's just like my opinion...
I dunno, the stock price hasn't really tanked, so it didn't really backfire that much and it allowed them to get a bit of feedback as they try to figure out how to maintain revenue.
What if they put out this insanely great model and people just stopped using search? That would be a backfire most likely.
You mean OpenAI put out a model and people stopped using search? Or Google?
What OpenAI offers currently can't really compete with search - I understand the data it's being fed gets newer and newer, but it's not really real-time like the search engines are. Indexing and presenting data is so different from what an LLM does. Even if it's fed new data, it's going to have to infer a lot because of a lack of history. It might be able to summarize recent events, I guess. Way dumbed down here, but I consider ChatGPT like a really smart encyclopedia that can search fast and stay in context across "searches."
If you meant Google, that's sort of what I'm saying - they wouldn't release something that could blow OpenAI out of the water. But I suspect what OpenAI offers as a product is something Google could've built long ago, or maybe did build and couldn't figure out how to monetize. They've instead invested in AI to make their products and services better, not so much to offer AI as a service.
What I meant by Bard not working out so great was that Google quickly dusted off or slammed together some shenanigans to stay relevant, even though what OpenAI is doing doesn't appear to be a part of their master plan.
It can compete with search by integrating search as part of its internal workflow when answering the question, which is exactly what Bing and the web browsing mode in ChatGPT already do.
And yeah, this means that it still needs the search engine. But it also means that ads are out of the picture for the user.
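Very roughly, the shape I mean - a toy Python sketch where search_web() and ask_llm() are made-up placeholders, not anyone's real API:

    # Minimal sketch of "search as part of the answer workflow": fetch fresh
    # results from an ordinary index first, then have the model summarize them.
    # Both functions below are stubs standing in for real services.

    def search_web(query: str, top_k: int = 3) -> list[str]:
        # Placeholder for a real search API call.
        return [f"stub snippet {i} for '{query}'" for i in range(top_k)]

    def ask_llm(prompt: str) -> str:
        # Placeholder for a real model call.
        return f"LLM answer based on: {prompt[:60]}..."

    def answer_with_search(question: str) -> str:
        snippets = search_web(question)                 # fresh data from the index
        context = "\n".join(f"- {s}" for s in snippets)
        return ask_llm(f"Search results:\n{context}\n\nAnswer the question: {question}")

    print(answer_with_search("what happened in the news today?"))

The point is that the model never has to "know" current events; the index stays the source of freshness, and the model just reads it.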
OpenAI couldn't do a Google-style, many-endpoints product diversification. OpenAI wants to make big models that do stuff no one else's can, period. They aren't going to waste their time making, for example, a therapy chatbot, a code helper, and a semantic search platform, and then curate, market, and bottle them all separately. If they wanted to do that and focused their business on it, Google would easily beat them because they have much better inertia with all their existing services. It makes much more sense to let other companies just use their simple API to do that, so they can focus on being the very best at the core models they offer.
The issue with language models, for the companies that create them, is that they are so general. If a company builds a language model into their backend, and another one comes out from a different company that's 1.3x as good, it would be trivial to switch to the better one. It's not like a company being so tied to AWS that they can't even fathom switching to another cloud platform because all their internal shit is built into the thousand specific ways AWS works. As a result, to be competitive you need to be the very best in that realm.
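To make the "trivial to switch" point concrete, a toy sketch (made-up vendor names, not real client libraries) of why a thin interface keeps the migration cheap:

    from typing import Protocol

    class TextModel(Protocol):
        def complete(self, prompt: str) -> str: ...

    class VendorA:
        def complete(self, prompt: str) -> str:
            return f"[vendor A] {prompt}"

    class VendorB:
        def complete(self, prompt: str) -> str:
            return f"[vendor B] {prompt}"

    def summarize_ticket(model: TextModel, ticket: str) -> str:
        # Backend code depends only on the interface, not the vendor.
        return model.complete(f"Summarize this support ticket: {ticket}")

    print(summarize_ticket(VendorA(), "printer on fire"))
    print(summarize_ticket(VendorB(), "printer on fire"))  # the whole "migration"

Compare that one-line swap with untangling a backend from a cloud provider's storage, queues, IAM, and networking.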
They are a weird company, and their core motivations aren't money, though obviously they have to make money if they want to keep doing what they want to do.
Yes, the release was wild and disruptive, but when your core motivation isn't greed, you can do some seriously wild and disruptive things.
Why is this noteworthy? You can do the same thing with Android messages... Hopefully this is not another Twitter attack. This isn't a "vulnerability" either.
The behaviour everyone likely expects, in Twitter and Android, is that if you send a video to one person directly, then only that one specific person will be able to access it.
It's different if the UI makes clear that you're uploading an image to a website where it will be publicly available, but random people "probably" won't find it, and you can share the link with someone.
Technically I agree - it's just one of those things that quite a few platforms do... It's similar to the Eufy stuff that circulated recently. User uploads XYZ, they expect it to be "private" - the platform devs decide private == obfuscated via a super long file name (a bit layman, sorry) in some kind of object storage.
While there's definitely a way to restrict access to the uploaded content to only those who should have it, it's often not implemented that way, since your uploaded content's URL is statistically improbable to "guess" and even more improbable to tie back to you.
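Something like this pattern, sketched in Python (the bucket URL format is made up for illustration):

    import secrets

    def make_upload_url(filename: str) -> str:
        # 128 bits of randomness in the object key: nobody enumerates this,
        # but anyone who has the link can fetch it - no auth check at all.
        token = secrets.token_urlsafe(16)
        return f"https://cdn.example.com/uploads/{token}/{filename}"

    print(make_upload_url("backyard-cam.mp4"))

The "security" is entirely in nobody being able to guess that token, not in any check of who is asking.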
I came off a little direct, straight up saying it was not a vulnerability without context. While I still stand by it not being a vuln from a security perspective, it's definitely not great.
Part of the issue with Eufy is that they uploaded people's content even when cloud backup was off. They also had the video stream unencrypted - it accepts an authentication token but never actually enforces it.
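The shape of that bug, as I understand it - a hypothetical sketch, not Eufy's actual code:

    VALID_TOKENS = {"secret-token-for-owner"}

    def get_stream(camera_id: str, token: str = "") -> str:
        # Bug: a token parameter is accepted (and a list of valid tokens even exists)...
        print(f"stream request for {camera_id}, token={token!r}")
        # ...but nothing ever checks the token against VALID_TOKENS before
        # handing back the stream URL.
        return f"rtsp://cams.example.com/{camera_id}/live"

    # Anyone who knows the camera ID gets the stream, token or not.
    print(get_stream("cam-123"))
    print(get_stream("cam-123", token="totally-wrong"))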
You can also just do this with Circle posts - the permalink to the video is always going to be available; the client/server just prevents unauthorised people from retrieving the content that displays it. While it wouldn't be too much work for Twitter to protect the content, there are far greater security concerns if people can access intentionally restricted content.
I sort of explained my thought process above, but I suspect they've done it this way for "CDN things."
It's not great - there's certainly a way to secure it - but like many other solutions, stuffing it in a storage bucket with a "random" URL is "good enough" in the eyes of the platform.
Protection with opaque links may not be a best practice, but it is certainly not a security vulnerability. Either the party willingly shares the link, or they need to be compromised for someone to get access to the content.
There's no remotely exploitable vulnerability. This isn't some auto-increment ID you could be hitting to see content you were not intended to see. Opaque links are unguessable.
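A quick back-of-envelope on "unguessable," assuming a random ID of ~19 URL-safe characters (the numbers are illustrative, not Twitter's actual format):

    alphabet = 64                       # URL-safe characters per position
    length = 19                         # length of a hypothetical random media ID
    keyspace = alphabet ** length       # ~2e34 possible IDs

    guesses_per_second = 1_000_000_000  # a wildly generous attacker
    years = keyspace / guesses_per_second / (3600 * 24 * 365)
    print(f"keyspace ~ {keyspace:.2e}, expected search time ~ {years:.2e} years")

Even at a billion guesses per second you're looking at something on the order of 10^17 years, which is why enumeration à la auto-increment IDs simply doesn't apply here.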
> However, if the URLs are somehow leaked (e.g., guessing, reverse engineering, brute force, exported through HAR files, intercepted by proxies) ... but the DM videos are available for anyone to access with no HTTP protection
"guessing, reverse engineering and brute force" all depend on unproven or unexistent vulnerabilities. what is the point of even mentioning them?
"exported through HAR files, intercepted by proxies" these would imply that the attacker would have access to the data anyway.
I understand the likelihood of a vulnerability, and I agree with your assessment that it's unlikely a casual observer could generate a list of these URLs (à la the Parler leak), but I disagree that this isn't broadly categorized as a vulnerability (it is, however academic or unlikely).
My question was: why did you feel that a security researcher publishing something they found must be because of some hatred for Elon Musk? What are the conditions under which someone can identify something and share it without an ulterior motive being assumed? I understand you can't criticize right-wing darlings, but is there anything else?
There's more to CS than passwords, mate. Your view is ultra narrow-minded, and shockingly selfish. Sure, they don't pay you to "put up with security," but they also don't need to pay you at all.
The poster you replied to is being realistic. Calling names doesn't change human nature. People in charge of security can work to build something that gets used properly, or they can nag and, at best, get people to go through the motions they absolutely have to. The latter option is what leads to findings like the one in the article.
I'm not saying my view is the way things should be. However when it comes to security, you always need to assume the worst and plan accordingly. I'd rather assume the worst and be pleasantly surprised than the other way around.
I'm not a career dev, but I have inherited teams and projects before that were a huge mess...
This isn't going to come off nicely, but your assumption that it needs a full rewrite is, in my eyes, a bigger problem than the current mess itself.
The "very junior" devs who are "resistant" to change are potentially like that in your view for a reason. Because of the cluster they deal with I suspect the resistance is more they spend most of their time doing it XYZ way because that's the way they know how to get it done without it taking even more time.
What it sounds like to me is that this business could utilize someone at the table who can understand the past, current, and future business - and can tie those requirements in with the current environment, with perhaps some "modernizing" mixed in there.
Part of the reason some VPN providers (including myself in the past) lease the hardware is to be extremely scalable. Customer retention low because VPN doesn't work for xyz streaming service? "Check out our new endpoints here in this obscure tiny colo!" Want to spin up endpoints in Russia? Done! Africa? Done!
I have equipment I own in two colos - for other things. It made way more sense when I was selling VPN access to lease everything.
Most importantly, this boast Azure is making falls flat on its face unless they own everything. Even if they "own the servers" they're likely still colocated, being provided network hand-off, etc. ¯\_(ツ)_/¯