The biggest problem I hear from people running their own MTA is the Google/Microsoft duopoly. If either of them marks you as spam, it's game over. You can have DKIM/SPF/DMARC and all the rest set up properly, it doesn't matter, your emails still end up in the spam folder. Not bueno..
Haha yeah, seems that way a bit.. There's really no problem with IP reputation at all.
If a prefix is often moved and traded, just DROP it at the edge.. the traffic will be malicious anyway.. problem solved.
The really valuable prefixes are the ones that are stable and have a good reputation.. Everything else is junk these days..
Hehe yeah.. For me, it's just an inverted search for God. There must be something behind it, and if it's not God, then it must be a simulation! Kinda sad, I would expect more from scientists.
The big riddle of the universe is how all that matter loves to organize itself, from basic particles to atoms, simple molecules, structured molecules, things and finally life.. Probably unsolvable, but that doesn't mean we shouldn't research and ask questions...
> The big riddle of the universe is how all that matter loves to organize itself, from basic particles to atoms, simple molecules, structured molecules, things and finally life.. Probably unsolvable, but that doesn't mean we shouldn't research and ask questions...
Isn't that 'just' the laws of nature + the 2nd law of thermodynamics? Life is the ultimate increaser of entropy, because for all the order we create we just create more disorder.
Conway's game of life has very simple rules (laws of nature) and it ends up very complex. The universe doing the same thing with much more complicated rules seems pretty natural.
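To make that point concrete, the rules really do fit in a couple of lines. A minimal sketch (the function name is just illustrative):

```c
/* The complete "laws of nature" of Conway's Game of Life fit in one
 * small function: a live cell survives with 2 or 3 live neighbors,
 * a dead cell becomes alive with exactly 3. Gliders, oscillators and
 * all the rest of the complexity emerge from just this rule applied
 * to every cell of the grid each generation. */
int next_state(int alive, int neighbors)
{
    if (alive)
        return neighbors == 2 || neighbors == 3;
    return neighbors == 3;
}
```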
Yeah, agreed. The actual riddle is consciousness. Why does it seem that some configurations of this matter and energy zap into existence something that (allegedly) did not exist in their prior configuration.
I'd argue that it's not that complicated. That if something meets the below five criteria, we must accept that it is conscious:
(1) It maintains a persisting internal model of an environment, updated from ongoing input.
(2) It maintains a persisting internal model of its own body or vehicle as bounded and situated in that environment.
(3) It possesses a memory that binds past and present into a single temporally extended self-model.
(4) It uses these models with self-derived agency to generate and evaluate counterfactuals: Predictions of alternative futures under alternative actions. (i.e. a general predictive function.)
(5) It has control channels through which those evaluations shape its future trajectories in ways that are not trivially reducible to a fixed reflex table.
This would also indicate that Boltzmann Brains are not conscious -- so it's no surprise that we're not Boltzmann Brains, which would otherwise be very surprising -- and that P-Zombies are impossible by definition. I've been working on a book about this for the past three years...
If you remove the terms "self", "agency", and "trivially reducible", it seems to me that a classical robot/game AI planning algorithm, which no one thinks is conscious, matches these criteria.
How do you define these terms without begging the question?
If anything has, minimally, a robust spatiotemporal sense of itself, and can project that sense forward to evaluate future outcomes, then it has a robust "self."
What this requires is a persistent internal model of: (A) what counts as its own body/actuators/sensors (a maintained self–world boundary), (B) what counts as its history in time (a sense of temporal continuity), and (C) what actions it can take (degrees of freedom, i.e. the future branch space), all of which are continuously used to regulate behavior under genuine epistemic uncertainty. When (C) is robust, abstraction and generalization fall out naturally. This is, in essence, sapience.
By "not trivially reducible," I don't mean "not representable in principle." I mean that, at the system's own operative state/action abstraction, its behavior is not equivalent to executing a fixed policy or static lookup table. It must actually perform predictive modeling and counterfactual evaluation; collapsing it to a reflex table would destroy the very capacities above. (It's true that with an astronomically large table you can "look up" anything -- but that move makes the notion of explanation vacuous.)
Many robots and AIs implement pieces of this pipeline (state estimation, planning, world models), but current deployed systems generally lack a robust, continuously updated self-model with temporally deep, globally integrated counterfactual control in this sense.
If you want to simplify it a bit, you could just say that you need a robust and bounded spatial-temporal sense, coupled to the ability to generalize from that sense.
The zombie intuition comes from treating qualia as an "add-on" rather than as the internal presentation of a self-model.
"P-zombie" is not a coherent leftover possibility once you fix the full physical structure. If a system has the full self-model (temporal-spatial sense) / world-model / memory binding / counterfactual evaluator / control loop, then that structure is what having experience amounts to (no extra ingredient need be added or subtracted).
I hope I don't later get accused of plagiarizing myself, but let's embark on a thought experiment. Imagine a bitter, toxic alkaloid that does not taste bitter. Suppose ingestion produces no distinctive local sensation at all – no taste, no burn, no nausea. The only "response" is some silent parameter in the nervous system adjusting itself, without crossing the threshold of conscious salience. There are such cases: Damaged nociception, anosmia, people congenitally insensitive to pain. In every such case, genetic fitness is slashed. The organism does not reliably avoid harm.
Now imagine a different design. You are a posthuman entity whose organic surface has been gradually replaced. Instead of a tongue, you carry an in‑line sensor which performs a spectral analysis of whatever you take in. When something toxic is detected, a red symbol flashes in your field of vision: “TOXIC -- DO NOT INGEST.” That visual event is a quale. It has a minimally structured phenomenal character -- colored, localized, bound to alarm -- and it stands in for what once was bitterness.
We can push this further. Instead of a visual alert, perhaps your motor system simply locks your arm; perhaps your global workspace is flooded with a gray, oppressive feeling; perhaps a sharp auditory tone sounds in your private inner ear. Each variant is still a mode of felt response to sensory information. Here's what I'm getting at with this: There is no way for a conscious creature to register and use risky input without some structure of "what it is like" coming along for the ride.
I have more or less the same views, although I can’t formulate them half as well as you do. I would have to think more in depth about those conditions that you highlighted in the GP; I’d read a book elaborating on it.
I’ve heard a similar thought experiment to your bitterness one from Keith Frankish:
You have the choice between two anesthetics. The first one suppresses your pain quale, meaning that you won’t _feel_ any pain at all. But it won’t suppress your external response: you will scream, kick, shout, and do whatever you would have done without any anesthetic. The second one is the opposite: it suppresses all the external symptoms of pain. You won’t budge, you’ll be sitting quiet and still as some hypothetical highly painful surgical procedure is performed on you. But you will feel the pain quale completely, it will all still be there.
I like it because it highlights the tension in the supposed platonic essence of qualia. We can’t possibly imagine how either of these two drugs could be manufactured, or what it would feel like.
Would you classify your view as some version of materialism? Is it reductionist? I’m still trying to grasp all the terminology, sometimes it feels there’s more labels than actual perspectives.
> The zombie intuition comes from treating qualia as an "add-on" rather than as the internal presentation of a self-model.
Haven't you sort of smuggled a divide back into the discussion? You say "internal presentation" as though an internal or external can be constructed in the first place without the presumption of a divided off world, the external world of material and the internal one of qualia. I agree with the concept of making the quale and the material event the same thing, (isn't that kinda like Nietzsche's wills to power?), but I'm not sure that's what you're trying to say because you're adding a lot of stuff on top.
That is not what a p-zombie is. The p-zombie does not have any qualia at all. If you want to deny the existence of qualia, that's one way a few philosophers have gone (Dennett), but that seems pretty ridiculous to most people.
The usual assumption is that there are only two positions:
1. Qualia exist as something separate from functional structure (so p-zombies are conceivable)
2. Qualia don't exist at all (Dennett-style eliminativism)
But I say that there is a third position: Qualia exist, but they are the internal presentation of a sufficiently complex self-model/world-model structure. They're not an additional ingredient that could be present or absent while the functional organization stays fixed.
To return to the posthuman thought experiment, I'm not saying the posthuman has no qualia, I'm saying the red "TOXIC" warning is qualia. It has phenomenal character. The point is that any system that satisfies certain criteria and registers information must do so as some phenomenal presentation or other. The structure doesn't generate qualia as a separate byproduct; the structure operating is the experience.
A p-zombie is only conceivable if qualia are ontologically detachable, but they're not. You can't have a physicalism which stands on its own two feet and have p-zombies at the same time.
Also, it's a fundamentally silly and childish notion. "What if everything behaves exactly as if conscious -- and is functionally analogous to a conscious agent -- but secretly isn't?" is hardly different from "couldn't something be H2O without being water?," "what if the universe was created last Thursday with false memories?," or "what if only I'm real?" These are dead-end questions. Like 14-year-old-stoner philosophy: "what if your red is ackshuallly my blue?!" The so-called "hard problem" either evaporates in the light of a rigorous structural physicalism, or it's just another silly dead-end.
You have first-person knowledge of qualia. I'm not really sure how you could deny that without claiming that qualia don't exist. You're claiming some middle ground here that I think almost all philosophers and neuroscientists would reject (on both sides).
> "couldn't something be H2O without being water?," "what if the universe was created last Thursday with false memories?," or "what if only I'm real?" These are dead-end questions. Like 14-year-old-stoner philosophy: "what if your red is ackshuallly my blue?!"
These are all legitimate philosophical problems; Kripke definitively solved the first one in the 1970s in Naming and Necessity. You should try to be more humble about subjects you clearly haven't read enough about. Read up on the Mary's room argument.
> You have first-person knowledge of qualia. I’m not sure how you could deny that...
I don't deny that. I explicitly rely on it. You must have misunderstood... My claim is not:
1) "There are no qualia"
2) "Qualia are an illusion / do not exist"
My claim is: First-person acquaintance does not license treating qualia as ontologically detachable from the physical/functional. I reject the idea that experience is a free-floating metaphysical remainder that can be subtracted while everything else stays fixed. At root it's simply a necessary form of internally presented, salience-weighted feedback.
> This middle ground would be rejected by almost all philosophers and neuroscientists
I admit that it would be rejected by dualists and epiphenomenalists, but that's hardly "almost all."
As for Mary and her room: As you know, the thought experiment is about epistemology. At most it shows that knowing all third-person facts doesn’t give you first-person acquaintance. It is of little relevance, and as a "refutation" of physicalism it's very poor.
There is no objective evidence of anything at all.
It all gets filtered through consciousness.
"Objectivity" really means a collection of organisms having (mostly) the same subjective experiences, and building the same models, given the same stimuli.
Given that less intelligent organisms build simpler models with poorer abstractions and less predictive power, it's very naive to assume that our model-making systems aren't similarly crippled in ways we can't understand.
That's a hypothesis but the alternate hypothesis that consciousness is not well defined is equally valid at this point. Occam's razor suggests consciousness doesn't exist since it isn't necessary and isn't even mathematically or physically definable.
Yeah, I guess... But such a question is not really interesting.. The answer is simple: there is nothing behind it.. and people aren't comfortable with that answer. Hence "how" is more interesting and scientific..
I am more curious why you actually care? Are you doing Ruby coding for a living?
This seems the only valid reason.
I love Ruby, I write a lot of tools in it. When I need extras, I extend Ruby using C. It's easy to do and very clean. I even write simple GUI stuff in Ruby, because why not?
And I don't really care if Ruby is loved or not. It's Open Source. I made a backup of all the stuff I care about. It's easy to just rebuild if necessary :)
Haha, it reminds me of an old movie, The Andromeda Strain I think... A human space probe crash-landed on Earth with some greenish patches of stuff on it. It was a space-dwelling organism that directly used energy-to-matter conversion for growth. It was a pretty decent movie actually :)
Yeah, kids like to waste time making C more safe or bringing in C++ features.
If you need them, use C++ or a different language. Those examples make the code look ugly and you're right about the corner cases.
If you need to clean up stuff on early return paths, use goto.. There's nothing wrong with it: jump to the end, do all the cleanup there, and return.
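A hedged sketch of that goto-cleanup idiom (the function, path, and buffer size are just placeholders):

```c
#include <stdio.h>
#include <stdlib.h>

/* The goto-cleanup idiom: every early exit jumps to a single label
 * that releases whatever was acquired so far. One return path, no
 * duplicated cleanup code. */
int process_file(const char *path)
{
    int rc = -1;
    FILE *f = NULL;
    char *buf = NULL;

    f = fopen(path, "rb");
    if (!f)
        goto out;
    buf = malloc(4096);
    if (!buf)
        goto out;
    if (fread(buf, 1, 4096, f) == 0 && ferror(f))
        goto out;
    rc = 0;            /* success */
out:
    free(buf);         /* free(NULL) is a no-op */
    if (f)
        fclose(f);
    return rc;
}
```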
Temporary buffers? If they aren't big, don't be afraid to use static char buf[64];
No need to waste time with malloc() and free(). They are big? Preallocate early and reallocate, or work in chunk sizes. Simple and effective.
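A sketch of the static-buffer idiom (the helper is hypothetical); the trade-offs are that the result is overwritten on the next call and the function is not thread-safe:

```c
#include <stdio.h>
#include <string.h>

/* Small static buffer instead of malloc/free: fine for short-lived
 * results in a single-threaded program. The returned pointer is
 * only valid until the next call overwrites the buffer. */
const char *fmt_kib(unsigned long bytes)
{
    static char buf[64];
    snprintf(buf, sizeof buf, "%lu KiB", bytes / 1024);
    return buf;
}
```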
No, because I did NOT do a serious analysis of this. Nor do I care, ask the commenter above.. C has some corner cases and undefined behaviours, and this stuff will make it worse IMO.
My thoughts as well. The only thing I would be willing to use is the macro definition for __attribute__, but that is trivial. I use C because I want manual memory handling; if I didn't want that, I would use another language. And I don't make copies when I want read access to something, that is simply not a problem. You simply pass non-owning pointers around.
In a function? That makes the function not thread-safe and the function itself stateful. There are places where you want this, but I would refrain from doing that in the general case.
It also has different behaviour within a single thread. This can be what you want, but I would prefer to pass that context as a parameter instead of keeping it in a hidden static variable.
What different behaviour do you mean? static in a function means it's just preallocated somewhere in the data segment, not on the stack nor the heap. That's it. Yes, it's not thread-safe, so it should never be used in libraries for any buffering.
But in a program, it's not bad. If I ever need multiple calls to it in the same thread:
static char buf[8][32];   /* 8 preallocated slots */
static int z;
char *p = buf[z];
z = (z + 1) & 7;          /* rotate, wraps after 8 calls */
Static foremost means that the value is preserved from the last function invocation. This is very different behaviour from an automatically allocated variable. So calling a function with a static variable isn't idempotent, even when all global variables are the same.
> If I ever need multiple calls to it in same thread:
What is this code supposed to do? It hands out a different pointer the first 8 times, then starts from the beginning again? I don't see what this is useful for!
If you want to return some precalculated stuff without using malloc()/free(). You just have 8 preallocated buffers and you rotate through them between calls. Of course you need to be aware that the results have a short lifetime.
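Fleshed out into a compilable sketch (the int-to-string conversion is just an example of "precalculated stuff"):

```c
#include <stdio.h>

/* Rotating static buffers: 8 preallocated slots handed out
 * round-robin, so the last 8 returned pointers remain valid at
 * once. No malloc/free, but not thread-safe, and each result is
 * clobbered 8 calls later. */
char *int_str(int v)
{
    static char buf[8][32];
    static int z;
    char *p = buf[z];
    z = (z + 1) & 7;              /* wrap after 8 calls */
    snprintf(p, sizeof buf[0], "%d", v);
    return p;
}
```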
That sounds like a maintenance nightmare to me. If you insist on static, I would at least only use one buffer, to make it predictable, but personally I would just let the caller pass a pointer, where I can put the data.
What application is that for? Embedded, GUI program, server, ...?
It is very predictable.. Every call returns a buffer; after 8 calls it wraps.
I use such stuff in many places.. GUI programs.. daemons.. Most of my stuff is single-threaded. If threads are used, they are really decoupled from each other.
Yes, you should never ever use it in a library.. But for small utility functions it should be ok-ish :)
This is an example from my Ruby graph library. GetMouseEvent can be called a lot, but I need at most 2 results. It's Ruby, so I can either dynamically allocate objects and let the GC pick them up later, or just use static stuff here with no GC overhead. It can be called hundreds of times per second, so it's worth it.
static GrMouseEvent evs[8];
static int z=0;
GrMouseEvent *ev=&evs[z];
z=(z+1)&7;   /* rotate slots, same pattern as above */
God forbid we should make it easier to maintain the existing enormous C code base we’re saddled with, or give devs new optional ways to avoid specific footguns.
Goofy platform-specific cleanup and smart-pointer macros published in a brand new library would almost certainly not fly in almost any "existing enormous C code base". Also, the industry has had "new optional ways to avoid specific footguns" for decades: it's called using a memory-safe language with a C FFI.
I meant the collective bulk of legacy C code running the world that we can’t just rewrite in Rust in a finite and reasonable amount of time (however much I’d be all on board with that if we could).
There are a million internal C apps that have to be tended and maintained, and I’m glad to see people giving those devs options. Yeah, I wish we (collectively) could just switch to something else. Until then, yay for easier upgrade alternatives!
I was also, in fact, referring to the bulk of legacy code bases that can't just be fully rewritten. Almost all good engineering is done incrementally, including the adoption of something like safe_c.h (I can hardly fathom the insanity of trying to migrate a million LOC+ of C to that library in a single go). I'm arguing that engineering effort would be better spent refactoring and rewriting the application in a fully safe language one small piece at a time.
I’m not sure I agree with that, especially if there were easy wins that could make the world less fragile with a much smaller intermediate effort, e.g. with something like Fil-C.
I wholeheartedly agree that a future of not-C is a much better long term goal than one of improved-C.
A simple pointer ownership model can achieve temporal memory safety, but I think to be convenient to use we may need lifetimes. I see no reason this could not be added to C.
Would be awesome if someone did a study to see if it's actually achievable... Cyclone's approach was certainly not enough, and I think some sort of generics or a Hindley-Milner type system might be required to get it to work, otherwise lifetimes would become completely unusable.
C does have the concept of lifetimes. There is just no syntax to specify them, so they are generally described along with all the other semantic details of the API. And no, it is not the same as Rust's model, which causes clashes with the Rust people.
I think there was a discussion in the Linux kernel between a subsystem maintainer and the Rust folks, which started with the Rust people asking for formal semantics so they could encode them in Rust, and the maintainer being unwilling to provide that.
One of them was a maintainer of that particular subsystem, but that doesn't mean that the other folks aren't also maintainers of other parts of the kernel.
It seems that people do NOT understand it's already game over.. Lost.. When the stuff was small and we had abusive actors, nobody cared.. oh, just a few bad actors, nothing to worry about, they will get bored and go away. No, they won't, they will grow and grow, and now even most of the good guys have turned bad because there is no punishment for it.. So as I said, game over.
It's time to start building our own walled gardens, overlay VPN networks for humans. Put services there, and if someone misbehaves? BAN their IP. They come back? BAN again. Come back again? wtf? BAN the VPN provider.. Just clean up the mess.. Different networks can peer and exchange. Look, the Internet is just a network of networks, it's not that hard.
Good idea. Another solution is to move our things to p2p. These corporations need expensive servers to run huge models or just collect data. Sometimes the winning move is not to play the game: true server-less.