I always figured the bartender was just another philosophy or sociology or political science major who couldn't find a better job. Which I'd say is also a relevant modern issue. I know plenty of college students and grads working shit jobs.
Is it finicky? The stealth system is very basic and based on line-of-sight and sound (for human enemies). As long as you aren't running, it's actually incredibly trivial to stealth through levels without being seen. Even the bots aren't really a problem, since they have static patrol paths and limited line of sight.
There's a generation gap here. The information density in current game UIs is high: status indicators, aiming aids, location markers, objective lists. Players expect the game to be a DM giving them lots of hints on how to proceed. At the time Deus Ex was made, one of the talking points was realism. Players wanted to experience the game as a simulation. The hand-holding that's expected today was frowned upon as breaking the immersion. The article briefly touches on this debate between realism and playability with regards to level design.
yeah I was surprised by this comment as well. imo, deus ex has one of the better implementations of stealth mechanics (certainly better than recent games like far cry). maybe it was bugged for GP?
> On the other hand, for us "tech people" it is actually easier to stop using Google and Apple and serve as example for others. Pushing for anti-monopoly legislation needs not be the only thing that we do to solve that problem.
Are you suggesting people stop using smartphones altogether?
> Are you suggesting people stop using smartphones altogether?
I do not have a smartphone and my life is alright, but I was not suggesting that.
You can certainly have a smartphone that is controlled by neither Google nor Apple, for example by installing an OS like LineageOS, or even some "freer" stuff that is out there.
In a tech crowd like HN, I don't think that's a reasonable solution for most people.
App ecosystems have network effects. Do your {bank, 2FA vendor, paycheck vendor, retirement vendor, etc} all support apps for LineageOS?
Additionally, the number of tech jobs which support those smartphones, their apps, testing websites on smartphone browsers, and the related marketing all require many of us to participate in the core smartphone markets, even if we choose not to use them outside of work.
If anybody is able to make these compromises, it is certainly senior software engineers. And yes, you can install any Android app on LineageOS, even if it does not have Google services enabled.
I have a lot of trouble getting my mom (nearly computer illiterate) to live without Google and similar stuff. But I mercilessly bully my co-workers about it. At least they agreed to use Jitsi instead of Zoom, so that's something.
There are services that operate on Android with open source implementations of the Google proprietary services which have been designed for lock-in. https://e.foundation packages up the work from Lineage and MicroG and sells phones with the software preinstalled.
> Apple wants to own iPhone consumers so that anyone who wants access to that demographic must pay the toll and follow their rules. It's far too much power for one corporation. Nice to see authorities are starting to take the matter seriously.
Unfortunately, this seems like a problem where nobody has come up with a good solution. Google allows other app stores on their phones and they allow sideloading, which might seem like the ideal situation. The biggest issue then is that Google still has a monopoly on apps on Android, because the size of the Play Store far eclipses every other store. So if Google bans an account or app unfairly, the only realistic option is to appeal and cave to whatever they want you to do, because being relegated to an app store that nobody uses is essentially a death sentence for that app and its revenue.
How do you fix something like that? The power, and the problem, doesn't go away just because there is choice. I feel like Google has neatly sidestepped the problem from their perspective and thus isn't being scrutinized, despite being in nearly the exact same situation as Apple.
A slightly different example: Internet Explorer essentially dominated the browser space for what felt like forever, until Chrome. There are any number of reasons, but the most obvious is that Microsoft could leverage their position as the platform owner to promote IE over all alternatives. The EU tried to regulate it by forcing them to add a "browser choice" dialog, which didn't work. The only reason Chrome became so popular was that Google did exactly the same thing as Microsoft: they could leverage their search engine, which by that time already owned 90% of the search market, to aggressively promote Chrome at every single opportunity.
There are all these examples of tech companies leveraging an existing platform to edge out competitors in clearly anti-competitive ways, and modern laws simply aren't up to the task of ensuring a fair, competitive market where massive companies like Google, Apple, Microsoft, Facebook, Amazon can't abuse their power.
2) Taking his argument in good faith, he also means "free" apps with microtransactions, because in both cases the same situation exists: The iOS App Store is too big a market to ignore. Which leads to the same problem, regardless of whether your app is paid or "free", that if Apple suddenly decides it doesn't like you, you're screwed.
It's worth pointing out that you can absolutely sell software to Linux users; e.g., I pay for all JetBrains software and recently purchased Ripcord[1].
What I've noticed on macOS, however, is that it is a lot easier to sell things that should be bundled or FLOSS for relatively serious money. For example, there are dozens of rather expensive "Finder replacements" on macOS, and you'll have a hard time selling something proprietary like that to Linux users because we already have good file managers that are libre software.
A cursory glance at Japanese culture and values shows me that America has had far less influence on its development than you believe. On the scale of individuality/community, Japan is further on the opposite side. The anime/manga industry is quite different than anything we have in the West, and a lot of their values tie into the things that are depicted, what's okay, what's not okay, etc. We often see traditional influences on their media that simply don't exist in the West in the same way (comparison of the American and Japanese versions of The Ring: https://www.tandfonline.com/doi/full/10.1080/01956051.2011.5...).
This is similar to the idea that a number isn't always a number. A number can be categorical, in which case comparison operators don't make any sense. It can be a label or a name, in which case addition and other mathematical operators don't make any sense. In the same way, a number can be an address, which has its own set of operations that operate on the idea of an address.
As an example, statistics has the idea of ordinal values vs interval values, which have different properties that make them not comparable to each other and that shouldn't be confused when working with them.
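The distinction can be made concrete with types: give each "kind" of number its own type, and meaningless operations fail instead of silently producing garbage. A minimal sketch in Python (the `Severity` enum and the variable names are made up for illustration):

```python
from enum import Enum

class Severity(Enum):
    """Ordinal: ordering is meaningful, arithmetic is not."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

zip_code = "90210"     # categorical label: neither ordering nor arithmetic makes sense
temperature_c = 21.5   # interval value: differences are meaningful, ratios are not

# Ordering ordinal values by their rank is fine:
assert Severity.HIGH.value > Severity.LOW.value

# But adding two severities is nonsense, and a plain Enum refuses to do it:
try:
    Severity.HIGH + Severity.LOW
except TypeError:
    pass  # the type system catches the category error for us
```

A plain `Enum` (unlike `IntEnum`) deliberately doesn't inherit integer arithmetic, which is exactly the "not every number is a number" idea encoded in the type.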
Haha, I do view mathematics as "software engineering but then purely for numbers".
I view things like the quadratic formula as little software programs. I view complex numbers as the necessary "software system" to allow for negative square roots.
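To make that concrete, here's a toy sketch of the quadratic formula as a small program, with complex numbers as the "subsystem" that makes negative square roots work (the function is my own illustration, not from the thread):

```python
import cmath

def solve_quadratic(a, b, c):
    """Return both roots of a*x^2 + b*x + c = 0, complex when necessary."""
    d = cmath.sqrt(b * b - 4 * a * c)  # cmath handles negative discriminants
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 + 1 = 0 has no real roots; the complex-number "system" supplies them:
print(solve_quadratic(1, 0, 1))  # → (1j, -1j)
```

Swapping `math.sqrt` for `cmath.sqrt` is the whole "software upgrade": the same little program now covers inputs that would previously have crashed it.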
People with other views on mathematics look at this perspective and shout "blasphemy! How could you insult such an intellectual pursuit!" But it works quite well for me when I'm doing math.
I like your lesson. I never saw it that clearly. And with respect to numbers, I don't think I saw this at all.
Computational modelling is already something that's fairly widespread for trying to understand the brain. The only novel thing here might be how his model is designed.
Well yes, but it still seems relevant to just ask: does this work? If you implement it, does it produce some kind of intelligent behavior? If not, then you've disproven the hypothesis; if so, then it might have practical benefit.
I mean, the whole point of implementing this would be to see what insights can be gained.
From the study:
> The basic operations of the Assembly Calculus as presented here—projection, association, reciprocal projection, and merge—correspond to neural population events which 1) are plausible, in the sense that they can be reproduced in simulations and predicted by mathematical analysis, and 2) provide parsimonious explanations of experimental results (for the merge and reciprocal project operations, see the discussion of language below)
This was an initial study, and I'm sure they're going to continue putting out papers exploring the model and how it compares with experimental data. It's not like everybody's going to see this one study and say "He's right! We should all use this model!" It's more that as more evidence is provided, the model becomes more relevant and it might be considered by others to be useful.
By "experimental data" do you mean something like "this had a pattern of neural activity that looks like what we've observed in biology," or do you mean "we tried driving a self-driving car with this method and it worked really well." The latter is more what I'm talking about, whether it's robotics, image classification, game playing, etc.
You do realize you can't just jump straight into "Let's see if this thing can drive a car", right? There are years of research and development that will happen before anything like that. And it's not like they're designing this to get a car-driving AI; they're trying to find an accurate model of the brain. Maybe, if results are promising, this can be turned into something like that in 10-20 years, but I wouldn't count on it. Maybe sooner if it turns out to be particularly promising. Chances are this is going to radically evolve in different directions. They might hit dead-ends. They might gain valuable insights about how some parts of the brain work but be unable to generalize them, or to go from there to a general problem-solving intelligence. There are all manner of problems, and I don't think you realize how complex the brain actually is when you're starting from first principles like this. They're at the level of simulating what individual neuron clusters do. That's like looking at electrons interacting and expecting to build a bridge by manipulating them individually.
I do realize, but I'm not saying let's try to jump right into general intelligence. Deep neural networks do all sorts of reasonably smart things right now, including all the things I mentioned. Seems to me we are at a point where we can test neural models to see whether they actually perform functions useful to an organism.
The first neural networks appeared in the 50s (with significant limitations), and proper research into them with appropriate funding only really started in the 80s. NNs didn't become a thing overnight, and neither will this. I'm not sure why you're expecting this to be proven now. Science is slow. It takes time to build evidence, "prove" things, and figure out how useful a model is.
> Despite being vigorously disputed in analytic philosophy in the 1990s due to work by Putnam himself, John Searle, and others, the view is common in modern cognitive psychology and is presumed by many theorists of evolutionary psychology.
The theory is quite foundational for cognitive science and popular in a number of other fields.