> As noted, consciousness seems to just be the ability to self-observe, which is useful as another predictive input.
As far as I know, "consciousness" refers to something other than self-referential systems, especially with regard to the hard problem of consciousness.
The [philosophical zombie](https://en.wikipedia.org/wiki/Philosophical_zombie) thought experiment is well known for imagining something with all the structural characteristics that you mention but without conscious experience, as in "what it's like" to be someone.
It seems entirely possible that the "philosophical zombie" is an impossible/illogical construct, and that in fact anything with all the structure necessary for consciousness will of necessity be conscious.
When considering the structural underpinnings of consciousness, it's interesting to note the phenomenon of "blindsight", which is essentially a loss of visual consciousness without an actual loss of vision!
Note that anything with mental access to its own deliberations and sensory inputs will by definition always be able to report "what it's like" to be themselves - what they are experiencing (what's in their mind). If something reports to you their qualia of vision or hearing, isn't this exactly what we mean by "what it's like" to be them - how they feel they are experiencing the world?!
> It seems entirely possible that the "philosophical zombie" is an impossible/illogical construct, and that in fact anything with all the structure necessary for consciousness will of necessity be conscious.
Yes, and that's pretty much exactly the point: we don't know of any way of determining whether someone is a p-zombie or a being with conscious phenomenal experience. We can certainly have an opinion or belief, or assume that sufficient structure means consciousness, which is a perfectly reasonable stance to take and one that many would take. But we have to be careful to understand that's not a scientific stance, since it isn't testable or falsifiable, which is why it's been called the "hard problem" of consciousness. It's an unfounded belief we choose for reasons like psychological comfort.
With regards to your latter point, I think you are making some sophisticated distinctions regarding the "map and territory" relation, and it seems you've hit upon the crux of the matter: how can we report "what it's like" for us to experience something the other person hasn't experienced, if it's not deconstructible to phenomenal states they've already experienced (and therefore constructible for them based on our report)? The landmark paper here is "What Is It Like to Be a Bat?" by Thomas Nagel, and if you're ever curious it's a pretty short read.
With regards to "blindsight" since I'm not familiar with it and curious, how do we distinguish between loss of visual consciousness and loss of information transfer between conscious regions, or loss of memory about conscious experience?
I'm not sure how much, if any, work has been done to study the brains of people with blindsight. I'm also not sure I would differentiate between loss of visual consciousness and loss of information transfer ... my understanding is that it's the loss of information transfer that is causing the loss of consciousness (e.g. maybe your visual cortex works fine, so you can see, and you can perform some visual tasks that have been well practiced and/or no longer need general association cortex, but if the connection between visual cortex and association cortex was lost, then perhaps this is where you become unaware of your ability to see, i.e. lose visual consciousness).
I don't think it's a memory issue - one classic test of blindsight is asking the patient to navigate a corridor cluttered with obstacles, which the patient succeeds in doing despite reporting themselves as blind - so it's a real-time phenomenon, not one of memory.
> Yes, and that's pretty much exactly the point: we don't know of any way of determining whether someone is a p-zombie or a being with conscious phenomenal experience.
That seems to come down to defining, in a non-hand-wavy way, what we mean by "conscious phenomenal experience". If this refers to personal subjective experience, then why is just asking them to report that subjective experience unsatisfactory?!
I get that consciousness is considered some ineffable personal experience, but consider a thought experiment: an experimenter, defining themselves as "conscious", wants to probe whether some subject's subjective experience differs from their own. They could at least attempt to verbalize any and all aspects of their own (the experimenter's) subjective experience and ask the subject if they felt the same, and the more (unconstrained) questions they asked without finding any significant difference, the more asymptotically unlikely any difference would become.
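To put a toy model on that intuition (my own back-of-envelope framing, nothing from the literature): if each unconstrained question independently had some fixed chance p > 0 of exposing a real difference, then the probability that n questions all fail to find one would be (1 - p)^n, which goes to 0 as n grows. The catch, of course, is justifying that p is actually greater than zero for something that's supposed to be ineffable in the first place.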
> which is why it [p-zombie detection] has been called the "hard problem" of consciousness
AFAIK the normal definition of the hard problem is basically how and why the brain gives rise to qualia and subjective experience, which really seems like a non-problem...
We have thoughts and emotions, and mental access to these, so it has to feel like something to be alive and experience things. If we introspect on what having, say, vision, is like, or what it is like to have our eyes open vs shut, then (assuming we don't have blindsight!) we are obviously going to experience the difference and be able to report it - it does "feel" like something.
Qualia are an interesting thing to discuss - why do we experience what we do, or experience anything at all for that matter, when we see, say, a large red circle? Why does red feel "red"? Why and how does music feel different in nature to color, and why does it feel the way it does, etc.?
I think these are also really non-problems that disappear as soon as you start to examine them! What are the differences in qualia between seeing a small red circle vs a large red circle, or a large blue circle vs a large red one? Once you separate the differences between qualia from the bare fact that we experience anything at all (which is proved by our ability to report that we do), the mystery starts to dissolve. Color is perceived as a surface attribute with a spatial extent, with colors differentiated by what they remind us of. Blue brings to mind water, sky, and other blue things; red brings to mind fire, roses, and other red things. Perception of color can be proven to be purely associative, not absolute, by Ivo Kohler's chromatic adaptation experiments, in which the subject wears colored goggles whose effect "wears off" after a few days, with normal subjective perception of color returning.
I'm actually curious here, because maybe our experiences are different. When you look at something red, before any associations or thoughts kick in, before you start thinking "this reminds me of fire" or analyzing it, is there something it's like for that redness to be there? Some quality to it that exists independent of what you can say about it?
For me, I can turn off all the thinking and associations and just... look. And there's something there that the looking is of or like, if that makes sense. It's hard to put into words because it's prior to words, and can possibly be independent of them.
But maybe that's not something universal? I know some people don't have visual imagery or an inner voice, so maybe phenomenal experience varies more than we assume. Does that distinction between the experience itself and your ability to think/talk about it track for you at all?
> And there's something there that the looking is of or like, if that makes sense
I think I know what you mean, but if you consider something really simple like a patch of a single color, even without any color associations (although presumably they are always there subconsciously), then isn't the experience just of "a surface attribute, of given spatial extent"? There is something there that is the same in that spatial region, but different elsewhere.
At least, that's how it seems to me, and isn't that exactly how the quale of a color has to be - that is the essence of it?!
> then isn't the experience just of "a surface attribute, of given spatial extent"?
I don't know why this seems to be so hard for me to think about and even put into words, but isn't "the experience of the surface attribute of a given spatial extent" (the description) something other than the experience of the surface attribute of a given spatial extent itself?
I mean that the words we use to describe something aren't the something itself. Conceivably, you can experience something without ever having words, and having words about a phenomenal visual experience doesn't seem to change the experience much or at all (at least for me).
Maybe another way of phrasing this would be something like: can we talk about red blotches using red blotches themselves, in the same way that we can talk about words using words themselves? And then, supposing that we could talk about red blotches using red blotches (maybe the blotches are in the form of words or structured like knowledge, I dunno), can we talk about red blotches without ever having experienced red blotches? I learned this idea from Mary's Room thought experiment, but I still don't know what to think about it.
Yes - the experience / quale has nothing to do with words.
The point (opinion) I'm trying to make is that something like the quale of vision, which is so hard to describe, basically has to be the way it is, because that is its fundamental nature.
Consider starting with your eyes closed, and maybe just a white surface in front of you, then you open your eyes. Seeing is not the same as not-seeing, so it has to feel different. If it was a different color then the input to your brain would be different, so that has to feel different too. Vision is a spatial sense - we have a 2-D array of rods and cones in our retina feeding into our brain, so (combined with persistence of vision) we experience the scene in front of us all at once in a spatial manner, completely unlike hearing which is a temporal sense with one thing happening after another... etc, etc.
It seems to me that when you start analyzing it, everything about the quale of vision (or hearing, or touch, or smell) has to be the way it is - it is no mystery - and an artificial brain with similar senses would experience it exactly the same way.
Yep, that's a cogent, serious stance, and it sounds a lot like illusionism (famously argued by Daniel Dennett) or functionalism, if you ever want to check out more about it.
It's a serious stance, but the really interesting thing to me here is that it's not a settled fact. What's quite surprising and unique about this field is that, unlike physics or chemistry where we generally agree on the basics, in consciousness studies you have some quite brilliant minds totally deadlocked on the fundamentals. There is absolutely no consensus on whether the problem is 'solved' or 'impossible', and it's definitely not a matter of people not taking this seriously enough or making rash judgments or simple errors.
I find this fascinating because this type of situation is pretty rare or unique in modern science. Maybe the fun part is that I can take one stance and you another and here there's no "right answer" that some expert knows and one of us is "clearly" wrong. Nice chatting with you :)
I've been in Europe for almost 2 months now and started seeing the GDPR banners a lot more often. I've yet to feel like I'm missing anything by either clicking reject all, or by avoiding the site if I can't reject all non-required cookies in a few clicks.
> Once you have worked a while in business or marketing, you will see that it's not that easy unfortunately
Nobody is forcing anybody to do this; this is a personal and business decision to make more money at the expense of users' well-being. When you're surrounded by lots of people who think a certain way, you start to see it as acceptable and even good.
Though I know lots of people that disagree, I personally don't think it's justifiable. If someone finds it justifiable, they should take responsibility for it.
My experience is that the source of all this is the fear of having a substantial disadvantage against the competition, and of having to defend your decision to sustain such a perceived disadvantage to the CEO/board. Understandable from my point of view, even though I don't like the outcome. This then usually trickles down the hierarchy in companies and, yes, someone will somehow implement it to earn their living. I'd call the threat of losing your livelihood as a consequence of not doing what you are told "force", but that is open to opinion I guess.
An anecdote that might be worth mentioning in this context:
I was once told by some CEO that they didn't hire a really qualified person because that person had enough money to not be dependent on the job. This is, in my experience, an accurate reflection of the role of money in controlling people's decisions. It's essential that you are dependent, so that you can be forced to comply or risk losing your livelihood.
My team's systems play a critical role for several $100M of sales per day, such that if our systems go down for long enough, these sales will be lost. "Long enough" means at least several hours, and in this time frame we can get things back to a good state, often without much external impact.
We too have manual processes in place, but for any manual process we document the rollback steps (before starting) and monitor the deployment. We also separate deployment of code from deployment of features (which is done gradually behind feature flags). We insist that any new feature (or modification of existing code) requires a new feature flag; while this is painful and slow, it has helped us avoid risky situations and panic, and has alleviated our ops and on-call burden considerably.
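As a minimal sketch of how that separation works (illustrative names only; a real flag store would be a config service polled at runtime, so a ramp can change without any deploy):

```python
import random

# Hypothetical in-process flag store. In practice this would be a
# config service polled at runtime, so flags change without a deploy.
FLAGS = {"new_pricing_path": {"enabled": True, "ramp_percent": 5}}

def flag_on(name: str) -> bool:
    """True if this request should take the flagged code path."""
    flag = FLAGS.get(name)
    if not flag or not flag["enabled"]:
        return False
    # Gradual ramp: only ramp_percent of traffic sees the new behavior.
    return random.uniform(0.0, 100.0) < flag["ramp_percent"]

def old_compute_price(order: dict) -> float:
    return order["qty"] * order["unit_price"]

def new_compute_price(order: dict) -> float:
    # The behavioral change under test: a bulk discount.
    discount = 0.9 if order["qty"] >= 10 else 1.0
    return order["qty"] * order["unit_price"] * discount

def compute_price(order: dict) -> float:
    if flag_on("new_pricing_path"):
        return new_compute_price(order)  # new code, shipped dark
    return old_compute_price(order)      # proven path stays the default

print(compute_price({"qty": 12, "unit_price": 3.50}))
```

The key property is that rolling the feature back is a flag flip, not a code rollback.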
For something to go horribly wrong, a defect would have to slip past many "filters":

1. Code review - accidentally introducing a behavioral change without a feature flag (this can happen, e.g. when updating dependencies).
2. Manual and dev testing (which is hit or miss).
3. Something in our deployment fails (luckily this is mostly automated, though as with all distributed systems there are edge cases).
4. Rollback fails or is done incorrectly.
5. Monitoring is missing, so we aren't alerted that the issue still hasn't been fixed.
6. We fail to escalate the issue in time to higher levels.
7. Enough time passes that we lose the ability to meet our SLA, etc.
For any riskier manual changes we can also require two people to make the change (one points out what's being changed over a video call, the other verifies).
If you're dealing with a system where your SLA is in minutes and changes are irreversible, you need to know how to practically monitor and roll back within minutes. If you're doing something new and manual, you need to quadruple-check everything and have someone else watching you make the change, or it's only a matter of time before enough things go wrong in a row and you can't fix it. It doesn't matter how good or smart you are: mistakes will always happen when people have to manually make or initiate a change, and that chance of mistakes needs to be built into your change management process.
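To sketch what "monitor and roll back within minutes" can look like mechanically (the deploy, rollback, and metric hooks below are placeholders for whatever your tooling provides, not any particular product's API):

```python
import time

ERROR_RATE_LIMIT = 0.01  # assumed health threshold: 1% of requests failing
WATCH_SECONDS = 300      # watch the rollout for five minutes
POLL_SECONDS = 10        # how often to re-check the metric

def watched_deploy(deploy, rollback, error_rate) -> bool:
    """Run a deploy, then watch a health metric and roll back on breach.

    `deploy`, `rollback`, and `error_rate` are callables supplied by
    your own tooling; they're stand-ins here, not a real API.
    """
    deploy()
    deadline = time.monotonic() + WATCH_SECONDS
    while time.monotonic() < deadline:
        if error_rate() > ERROR_RATE_LIMIT:
            rollback()  # the documented, pre-tested rollback path
            return False
        time.sleep(POLL_SECONDS)
    return True  # rollout held steady for the whole watch window
```

The point isn't the specific numbers; it's that the watch window and the rollback path are decided (and rehearsed) before the change starts, not improvised during the incident.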
> My team's systems play a critical role for several $100M of sales per day, such that if our systems go down for long enough, these sales will be lost.
Would they? Or would they just happen later? In a lot of cases in regular commerce, or even B2B, the same sale can often be attempted again by the client a little later; it's not "now or never". As a user I have retried things I wanted to buy when a vendor was down (usually because of a new announcement and big demand breaking their servers), or when my bank had some maintenance issue, and so on.
It's both (though I would lean towards lost for a majority of them). It's also true that the longer the outage, the greater the impact, and you have to take into account knock-on effects such as loss of customer trust. Since these are elastic consumer goods, and ours isn't the only marketplace, customers have choice. Customers will typically compare price, then speed.
It's also probably true that a one-day outage would have a negative net present value (taking into account all future sales) far exceeding the daily loss in sales, due to loss of customer goodwill.
It would be a serious issue for in-person transactions like shops, supermarkets, gas stations, etc.
Imagine Walmart's or Costco's or Chevron's centralised payment services going down for 30+ minutes. You would get a lot of lost sales from those who don't carry enough cash to cover it otherwise. Maybe a retailer might have a zip-zap machine, but lots of cards aren't embossed these days, so that's a non-starter too.
Not just lost sales. I've seen a Walmart lose all ability to do credit card sales and after about 5 minutes maybe 10% of people waiting just started leaving with their groceries in their cart and a middle finger raised to the security telling them to stop.
It depends on the business. It's not uncommon for clients to execute against different institutions' systems, and they can/would re-route flow to someone else if you're down.
Think less "buying a car" and more "buying a pint of milk". If you're buying a car and the store is closed, you might come back the next day. If you're buying milk you will just go to the store down the street.
I imagine it's the same with time-based or opportunistic businesses. If the shopping channel (assuming it runs around the clock) couldn't process orders, they'd have to decide if they want to forgo selling other products to rerun the missed ones.
For certain types of entertainment like movies or sports, the sale may no longer be relevant.
You're talking about the dog not minding being in the crate since you've taught them it's nice and cozy. In that case why does it have to be a crate? Why not, say, an indoor doghouse?
As far as that goes, there's nothing wrong with that and that's not the part that people actually have problems with (but it's a nice strawman to argue against).
The part that is cruel and abusive is locking them up when nobody is at home so they can't damage your possessions. If there is some emergency like a fire, an intruder, something falling down, etc., they can't do anything about it.
A salaried role is paid the same regardless of how long one works. A rationally run business should care about what's produced, not the amount of labor-hours it takes to produce it. Developer productivity varies wildly, so in a fair labor market, time worked and compensation should vary with developer productivity (sometimes compensation is correlated with time worked, but generally at diminishing marginal returns).
Of course there is a dynamic between the business and the employee when it comes to their expectations of each other. All else being equal, a business would like to get more output per dollar spent, and an employee would like to get paid more in total and work fewer hours. Nowhere in the goals of this dynamic does hours worked come into the picture. What does happen is that businesses believe they would get more output per dollar spent if they can get a salaried employee to work more hours, so they pressure employees into doing so. People generally like to be in charge of others, so un-enlightened managers force employees to be at the office because they like seeing them there.
Enlightened managers care first about cultivating great relationships, secondly about the total output of an employee, and therefore not at all about hours worked. Marginal productivity per hours worked eventually goes negative as hours worked increase, and in my opinion the point at which it becomes negative is a lot lower than most people believe (probably ~20-30 hrs/week over the long term).
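As a toy illustration of that last claim (a made-up functional form, not data): if weekly output were O(h) = a*h - b*h^2 for h hours worked, the marginal product O'(h) = a - 2*b*h turns negative once h > a/(2b), so past that point every extra hour reduces total output; where that crossover actually sits is the empirical question.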
Besides, highly productive developers are in very high demand. You're just shooting yourself in the foot if you don't give them a fair deal, because they'll go somewhere else, unless they're on a work visa in which case they'll remember if you don't treat them well.
Please do note that this argument applies mostly to salaried employees in knowledge-work.
> Besides, highly productive developers are in very high demand. You're just shooting yourself in the foot if you don't give them a fair deal, because they'll go somewhere else
Ok, but "we expect you to work full time when we hire you for a full time job" is a fair deal. It is not "unfair" to hire people full time rather than part time.
Socratic dialogue, where everyone is intent on building up a shared truth. Socrates talks about these two different ways of engaging in dialogue in Plato's works.
Ironically, Plato's Socrates rarely ever actually arrives at what he considers truth, most dialogues just end at an impasse. This is arguably why they're still so readable. They're more about teaching you how to think and question what you know than proselytizing some particular doctrine.