rooftopzen's comments


She needs to update her LinkedIn profile then; she might now have a full year there.


I've spent about an hour a week on this since January. I've traced a large percentage of this year's bogus news stories back to Reuters (fwiw) before they were picked up by other outlets and spread.

I've also found legitimate stories sourced from Reuters, but I haven't found illegitimate stories NOT sourced from Reuters; in other words, the illegitimate ones all seem to originate from the same source, not sure why.


Sorry, but your concept of AI is marketing-driven. It's probabilistic; understanding that is above your pay grade.


They're actually right that there are several attempts to create automated labs to speed up the physical part. But in reality there are only a handful, and they are very, very narrowly scoped.

But yes, in some narrow domains this will potentially be possible, though it still only automates a part of the whole process when it comes to drugs. How a drug behaves on a molecular test chip is often very different from how it works in the body.


Exactly - AI allows for intersections of concepts from its training data; it's up to the user to make sense of them. Thanks for stating this (I end up repeating the same thing in every conversation, but it's common sense).


Sam Altman has been a joke for a while now; I've only heard his investors defend him, angling for a markup on their next round. Is that who you are?


Dove deep into this - 25+ security issues; no thx


I agree with the post, and with its relevance given the original 2020 date, but I'm curious what the intention was in reposting it after such a long time; see the link below.

https://news.ycombinator.com/from?site=alexw.substack.com


You naively replaced a deterministic process with a probabilistic one, following a trend that is uneducated.

I am taking screenshots of blog posts like this for a museum exhibit opening next year - lmk if you're willing.


We're not replacing deterministic processes with probabilistic ones; that would be insane for production data.

Here's what actually happens:

1. MCP exposes system schemas in a standardized way

2. AI analyzes the schemas and suggests mappings

3. Engineers review and validate every mapping

4. AI generates deterministic integration code (think: writing the SQL, not running it; see the sketch below)

5. We test with real data before any production deployment
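For concreteness, here's a minimal sketch of steps 2-4 in Python. The Mapping structure and the names are my own illustration (assumptions), not our actual code; the model only writes SQL fragments, and nothing runs against production until engineers approve the mappings and the rendered SQL is tested.

    from dataclasses import dataclass

    @dataclass
    class Mapping:
        source_column: str   # column in the source system's schema (step 1)
        target_column: str   # column in the target schema
        transform_sql: str   # deterministic SQL fragment the model proposed (step 2)

    def generate_integration_sql(mappings, source, target):
        """Render engineer-approved mappings (step 3) into plain SQL (step 4)."""
        cols = ", ".join(m.target_column for m in mappings)
        exprs = ",\n        ".join(
            f"{m.transform_sql} AS {m.target_column}" for m in mappings
        )
        return (
            f"INSERT INTO {target} ({cols})\n"
            f"SELECT\n        {exprs}\n"
            f"FROM {source};"
        )

    # Hypothetical mappings, as they'd look after engineer review:
    mappings = [
        Mapping("cust_nm", "customer_name", "TRIM(cust_nm)"),
        Mapping("sgnup_dt", "signup_date", "CAST(sgnup_dt AS DATE)"),
    ]
    print(generate_integration_sql(mappings, "staging.customers_raw", "dw.customers"))

The point is that the artifact which ships is plain, reviewable SQL, so production behavior stays deterministic regardless of what the model did at authoring time.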


So you've not replaced ETL with MCP; you're just using LLMs to generate SQL.


Caveats:

1. It's 101 pages (does page count correlate with an aggressive effort to seem authoritative, e.g. 'state of the world' PDFs?).

2. This appears to come from the AGI-existential-threat doomer camp; does it even have any validity? At first glance it appears both absurd in terms of its presumptive risks and AI-generated (this is biased toward the pages I'd read).

3. MLCommons has a more scholastic approach to ranking models on potential harms; curious about the reception to all approaches (pros vs. cons of each).


Not following exactly, so apologies if I'm misinterpreting, but I'm the author, and I updated this post (transparently) with nuance I'd recently learned about that explains this (somewhat): the larger bills contain entire pages of nothing but headings, and those headings contain emdashes. I removed the headings from the analysis so that the emdashes-per-page count reflects only the legislative text itself. Even with that conservative, minimal adjustment, we're still looking at a 30% increase over a decent baseline.
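For anyone curious what that adjustment looks like mechanically, here's a rough sketch; the heading heuristic below is a placeholder assumption for illustration, not the exact rule used in the post.

    EMDASH = "\u2014"

    def emdashes_per_page(pages):
        """Average emdashes per page, skipping heading-only lines."""
        total = 0
        for page in pages:
            for line in page.splitlines():
                stripped = line.strip()
                # Placeholder heading heuristic (my assumption): short all-caps lines.
                if stripped and stripped.isupper() and len(stripped.split()) < 10:
                    continue
                total += line.count(EMDASH)
        return total / len(pages) if pages else 0.0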

