a235's comments

I gave it a go with a less popular language. At first glance this is a Duolingo clone, and it does not place you at the correct level after the placement test.

Funny vibe-coding glitch: an exercise gives away the answer in the transition between questions.


The maintainer of curl recently gave a talk on AI slop in security reports, showing this and other examples:

https://youtu.be/6n2eDcRjSsk?si=p5ay52dOhJcgQtxo -- "AI slop attacks on the curl project", keynote by Daniel Stenberg at the FrOSCon 2025 conference, August 16, in Bonn, Germany.

Plus, linked above, his blog post on the same subject: https://daniel.haxx.se/blog/2025/08/18/ai-slop-attacks-on-th...


How does the GraphScope graph backend compare to Euler, the previous one built by Alibaba? https://github.com/alibaba/euler/wiki


The learning engine in GraphScope (aka GIE) has a similar programming interface and functionality to Euler. Both support graph neural network training, e.g., GraphSAGE, GCN, GAT, etc. However, there are many differences under the hood.

1. The programming model is different. Euler provides a message-passing-style API for defining new graph models, while GIE provides a sampling API first: it abstracts a batch of seed nodes or edges (called "egos") together with their receptive fields (multi-hop neighbors) as an "EgoGraph", which can be turned into an "EgoTensor" of features.

2. Euler implements operations on graphs as TensorFlow ops, while GIE takes a more flexible design and is not coupled to a specific machine learning framework. On top of the sampling interface, developers can build their GNN models with TensorFlow, PyTorch, or any other machine learning framework, and even use it for non-GNN tasks.

3. Most importantly, Euler is only for graph learning. In GraphScope, the learning engine can work together with the analytical engine and the interactive query engine. Real-world cases are usually so complicated that the problem is not just a GNN training task; different dedicated systems may be involved for different kinds of workloads, which means users have to move and transform data back and forth between systems. In GraphScope, the engines for different purposes share the same graph and live in the same Jupyter notebook, delivering one-stop large-scale graph computation. That is more user-friendly for data scientists.
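As a toy illustration of the sampling-first programming model described in point 1 (this is not the actual GraphScope or Euler API; every name here is hypothetical), a fixed-fanout ego-graph sampler over a plain adjacency list might look like:

```python
import random

# Hypothetical sketch of a sampling-first GNN interface: sample a
# fixed-size receptive field around each seed ("ego") node, then pack
# the hops into dense, framework-agnostic feature matrices.

def sample_ego_graph(adj, seeds, fanouts, rng):
    """Return one list of node ids per hop: [seeds, hop1, hop2, ...].

    adj     -- adjacency list, node id -> list of neighbor ids
    seeds   -- the batch of ego nodes
    fanouts -- neighbors sampled per node at each hop, e.g. [2, 2]
    """
    hops = [list(seeds)]
    frontier = list(seeds)
    for fanout in fanouts:
        nxt = []
        for v in frontier:
            nbrs = adj.get(v, [])
            # Sample with replacement so every node yields exactly
            # `fanout` neighbors and each hop has a fixed, dense shape.
            nxt.extend(rng.choices(nbrs, k=fanout) if nbrs else [v] * fanout)
        hops.append(nxt)
        frontier = nxt
    return hops

def to_ego_tensor(hops, features):
    """Turn node ids into per-hop feature matrices ("EgoTensor"-like)."""
    return [[features[v] for v in hop] for hop in hops]

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    features = {v: [float(v), float(v) * 2] for v in adj}
    rng = random.Random(42)
    hops = sample_ego_graph(adj, seeds=[0, 3], fanouts=[2, 2], rng=rng)
    print([len(h) for h in hops])  # [2, 4, 8]: the batch grows by the fanout each hop
```

Because the output is just nested lists of features, it illustrates point 2 as well: the same sampled batch could be handed to TensorFlow, PyTorch, or any other framework as a plain tensor.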


alibaba/euler is a lightweight library built specifically for GNN sampling.

GraphScope, by contrast, is built for many kinds of graph tasks, such as Gremlin queries, graph analytics, and GNN sampling.

You can check this example to get an idea of what GraphScope can do: https://nbviewer.jupyter.org/github/alibaba/GraphScope/blob/...

The graphs in GraphScope are backed by vineyard (https://github.com/alibaba/libvineyard). That enables GraphScope to have multiple specifically optimized runtimes (written in C++, Rust, and Python) for different tasks that share the distributed graph data in memory efficiently.


These kinds of experimental interfaces are what would make me consider ordering one! Would love to try it now.


Done, now I'm waiting for my pre-order to arrive.


Wow, another "God of the gaps" fallacy paper. It mixes everything: math theory, algorithmics, evolution, and the genetic optimization algorithm in its simplest mathematical model, all to reach a conclusion showing that the authors did not understand how those things fit together.

On Goldbach’s conjecture: "Hypothesizing that the new axiom is true requires faith; faith in its consistency within the formal system"

And concluding: "However, not all possible phenomes can be produced by genetic algorithm. So it is a matter of faith to believe that every existing phenome [..] could be the result of material things interacting with one another."


Are there any specific errors you've identified?


FB published a bit on how the fact checkers are selected: they are independent journalist organizations.

I'd say the contrary: it would be wrong for FB to verify facts in-house. This way they are more transparent.


The Sea Shepherds are key to seeding the legal debate: they are documenting and exposing whale hunting as it is.

Without this documentation and tracking, the whalers will continue to hide from anyone's view.


They have a HUGE bias, I doubt anything coming out of them is the actual truth.

There are around 500,000 minke whales (the type they hunt) in the Antarctic, Japanese whaling kills approximately 300 whales per year.
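Taking those two figures at face value, the annual catch works out to a tiny fraction of the quoted population:

```python
# Annual catch as a fraction of the quoted Antarctic minke population,
# using the numbers from the comment above.
population = 500_000
catch_per_year = 300
print(f"{catch_per_year / population:.4%}")  # 0.0600%
```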

Please explain to me why you think this is such an issue?


There are 8 billion humans. So we should be allowed to hunt a few hundred thousand a year without anyone having a problem with it.

The problem isn't primarily the number of minke whales. It's that they are hunting what many people consider a sentient species, in an inhumane way. In most parts of the world, you aren't allowed to slaughter an animal (at least in theory) in a way that causes pain or distress. With whales there is no reasonable way of ensuring this, due to their size.


A good introduction to the moral problems of animal slaughter is Charles Patterson's Eternal Treblinka: Our Treatment of Animals and the Holocaust.


I would argue that the "fake news" epidemic is partially caused by social media. Specifically, information overload is to blame.

A few decades back we had, and are now losing:

- Traceable reputation, which is much harder to establish in our uncontrolled, random, and sparse feed interactions.

- Focus on a topic. Social crises used to build up and be considered and reflected on. Nowadays we have moved into the realm of emotional reactions. No one can be expected to have the mental capacity to consume as many pieces of disjoint information as we do today on social media. In an average feed, a president eats a kid and a cat does a funny trick, one after the other.

The prime incentive in the current social system is instant gratification. This new system is easy to abuse, as we move further from analytical toward emotional consumption.


> Information Overload is to blame.

The information overload is due to centralization and advertising.

Individuals can easily overcome information overload via compartmentalization, but Facebook works directly against that strategy by mixing sources, and mixing in advertisements.

What we need to provide individuals is a decentralized communication network that has all the features and accessibility users want.


Sadly, ocean studies are underfunded due to a general lack of public interest. "You don't see - you don't care", and pictures like these are a first step toward drawing people's attention. Thank you!


Also sad from the AI perspective: this was a great public, human-labeled dataset.

