For me it depends a lot on the context. JSON is often very human readable (as long as it's not too deeply nested), fairly well defined (compared to CSVs), and most languages and software have easy out of the box support for parsing and manipulating it.
If I were building a system that had to deal with large amounts of tabular data that isn't directly consumed by humans, JSON wouldn't be my first choice nor my last.
It's interesting that JSON is still the format of choice for transmitting tabular data to SPAs and mobile apps. Granted, it's likely compressed. But it still seems like something more efficient, such as CSV, would be better.
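As a rough illustration of where the overhead comes from, here's a toy sketch (the rows and field names are made up, not from any real API): the same table encoded as a JSON array of objects repeats every key on every row, while CSV states the header once.

```rust
// Toy comparison: the same tabular rows as a JSON array of objects vs. CSV.
// For tabular data, most of JSON's extra bytes are the repeated field names.
fn main() {
    let rows = [("alice", 30, "NYC"), ("bob", 25, "LA"), ("carol", 41, "SF")];

    // JSON: array of objects, keys repeated on every row.
    let json_rows: Vec<String> = rows
        .iter()
        .map(|(name, age, city)| {
            format!(r#"{{"name":"{}","age":{},"city":"{}"}}"#, name, age, city)
        })
        .collect();
    let json = format!("[{}]", json_rows.join(","));

    // CSV: header once, then bare values.
    let mut csv = String::from("name,age,city\n");
    for (name, age, city) in rows.iter() {
        csv.push_str(&format!("{},{},{}\n", name, age, city));
    }

    println!("JSON ({} bytes): {}", json.len(), json);
    println!("CSV  ({} bytes): {}", csv.len(), csv);
}
```

Compression narrows that gap considerably, since the repeated keys compress well, which is part of why JSON-over-gzip stays "good enough" in practice.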
> Garbage collection tasks are postponed to when we're asleep, but they can only be put off for so long and there's a capacity for "stuff to consolidate later" that fills up
Do we have a strong reason to believe that? I know brain and sleep mechanisms are tricky topics with lots of unknowns, but I thought I had read research that showed sleeping brains likely perform a chemical analogue to "garbage collection".
Indeed, it's called the glymphatic system. We evolved to have a circadian system that drains brain waste products during sleep. There are probably other unknown neurological consequences of sleep that remain to be detailed.
In addition to the chemical ones others discussed, there is informational "garbage collection" as well. There are a bunch of motor tasks that only show certain types of improvement after sleep - not time, sleep. There are also rodent studies where you can do a learning task and then "clear" part of their hippocampus using optogenetics. If you clear it before sleep, they forget what they "learned"; if you clear it after sleep, they still remember it. Again, multiple clever conditions to control for time and separate it from sleep.
It's unlikely for sleep to only do one thing. At a minimum it reduces caloric expenditures and accumulation of damage or risk through activity. It also allows time to heal injuries without the need for inflammation to immobilize the area.
Really, the lack of movement itself probably matters for all kinds of things, both useful and harmful. REM sleep probably has some brain benefits, but moving the eyes may relate to circulating fluid within the eye, etc. etc.
I can't shake the feeling that a trillion or a quadrillion parameters won't solve the fundamental shortcoming of ML models not being models of artificial intelligence. I guess there's no way of knowing until we reach AGI, but I've never heard a compelling argument for why pure ML would get us there. GPT3 seems more like an argument against that hypothesis (in my view) than for it. Even the best, most expensive models today are incredibly brittle for enterprise use cases that shouldn't necessarily require AGI.
I've always imagined AGI (perhaps naively) as being achieved by clever usage of ML, plus some utilization of classical/symbolic AI from pre-AI winter days, plus probably some unknown elements.
I feel that there are three requirements for an NN-based AGI, inspired by biology:
a). an internal feedback loop that evaluates a possible output without actuating it, and self-modifies the parameters if the possible output is not what's needed
b). the capability (based on a) to model its own behaviours without acting on them, and to model other agents' behaviours and incorporate that model into the feedback
c). the ability to switch between modelling its own behaviour and other agents' behaviour intentionally, by the model itself - as part of the feedback loop
i.e. what I feel is totally missing in self-driving cars today is the capability to model OTHER traffic participants' actions and intentions; an experienced and attentive human driver does this all the time, pays attention to pedestrians on the side in case they jump in front of the car, pays attention to where other cars are LIKELY to go, pays attention to how the bicyclist currently being overtaken may fall, even pays attention to random soccer balls flying out of a courtyard because a kid may be chasing that. I am not seeing any self-driving car trying to model any agent other than itself.
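A toy sketch of what requirement (a) alone might look like, just to make the loop concrete. Everything here (the `Agent` struct, the scalar stand-in for parameters, the error threshold) is hypothetical and purely illustrative, not a claim about how a real NN would implement it:

```rust
// Toy illustration of requirement (a): generate a candidate output
// internally, evaluate it without actuating it, and nudge the
// parameters if it isn't good enough. All names are hypothetical.
struct Agent {
    param: f64, // stand-in for the model's parameters
}

impl Agent {
    // Propose an output for a given input without acting on it.
    fn propose(&self, input: f64) -> f64 {
        self.param * input
    }

    // Internal evaluation: how far is the candidate from what's needed?
    fn internal_error(&self, candidate: f64, desired: f64) -> f64 {
        (candidate - desired).abs()
    }

    // The inner loop: simulate, evaluate, self-modify, repeat,
    // and only "actuate" once the candidate passes the check.
    fn act(&mut self, input: f64, desired: f64) -> f64 {
        for _ in 0..1000 {
            let candidate = self.propose(input);
            if self.internal_error(candidate, desired) < 1e-3 {
                return candidate; // good enough: actually emit it
            }
            // crude self-modification step (gradient-free nudge)
            self.param += 0.01 * (desired - candidate).signum();
        }
        self.propose(input) // give up and emit the best we have
    }
}

fn main() {
    let mut agent = Agent { param: 0.0 };
    let output = agent.act(2.0, 1.0); // wants input 2.0 mapped near 1.0
    println!("actuated output: {output:.3}, learned param: {:.3}", agent.param);
}
```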
Cruise actually considers both social dynamics and uncertainty (i.e. what can hide behind an obstacle, or where pedestrians/bikes/cars are likely to move to).
If you are interested in self-driving cars, I can highly recommend their presentation from November 2021:
That's what gets me about self-driving cars. The road is a very social space, and follows social rules. Pretty much all of the communication and norms happening on the road are social ones.
The thing that would convince me AGI is ready would be to play a convincing game of poker. Or join in on a conversation mid-way through, listen to it, and engage with it actively. Show that machines are able to pick up on social cues, understand them, and learn new ones. It's a high bar, yes, but it's in my opinion a prerequisite for a self-driving car that's able to share roadways with other cars, cyclists, and kids playing in the street.
Well, the theory for the end-to-end image-based self-driving models is that they are supposed to cover that.
The reasoning is that given enough training data the system would know the pedestrian is going to jump out or the cyclist is going to fall just based on sheer volume of training examples. It would have seen that scenario tons of times in the image data.
Whether that will actually work is the question though
Personally I think that imitating biology may be a flawed approach for most applications. Although such efforts are worthy ends in themselves just for their role in understanding ourselves, in a forensic-archaeologist, try-to-replicate-it sort of way, let alone any potential insights into biological brains.
Biology is glacially slow in comparison, and one of the advantages of computing is being fast.
I believe that not modeling it is partially by design, as a result of responsibility and blame frameworks. If you depend upon possible actions taken by others to be safe, you are reckless. Extrapolating from current motions is more reliable than trying to profile everything. "They are moving towards the street at 3 mph and 20 ft away, their vector will intersect with the car, brake to avoid collision or accelerate enough to leave the intersection zone before they can even reach us" seems a more reliable approach. It isn't like a kid will suddenly teleport into the road.
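The back-of-the-envelope version of that check, as a sketch. The 3 mph / 20 ft figures come from the comment above; the car-side numbers are invented for illustration:

```rust
// Extrapolate-from-current-motion check, using the numbers from the
// comment above: pedestrian at 20 ft moving toward the road at 3 mph.
fn main() {
    let ped_distance_ft = 20.0;
    let ped_speed_mph = 3.0_f64;
    let ped_speed_ftps = ped_speed_mph * 5280.0 / 3600.0; // ~4.4 ft/s

    // Time until the pedestrian's current vector intersects the lane.
    let time_to_lane_s = ped_distance_ft / ped_speed_ftps; // ~4.5 s

    // Hypothetical car-side numbers: time for us to clear the conflict
    // zone at current speed, and time needed to brake to a stop.
    let time_to_clear_zone_s = 1.2;
    let time_to_stop_s = 2.5;

    if time_to_clear_zone_s < time_to_lane_s {
        println!("accelerate/continue: we clear the zone before their vector intersects");
    } else if time_to_stop_s < time_to_lane_s {
        println!("brake: we can stop before the paths cross");
    } else {
        println!("emergency maneuver");
    }
}
```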
Computer vision, image recognition, audio recognition, and speech recognition were somewhat easy when Moore's law kicked in and the computer software industry emerged. But AGI is a whole other beast. For general intelligence you need an underlying infrastructure that runs it and guides it, just like the nervous system does for us people or like an operating system does for computers. You cannot, for example, glue together computer vision and speech recognition and call it intelligence when all it does is recognize what it sees and what it hears.
My observation with statements like this, both for and against some event occurring, is that you'd have to be very specific with the definitions of "AGI" and "human intelligence", otherwise everyone ends up claiming they predicted the outcome correctly (e.g. Ray Kurzweil's prediction evaluations seem to me like an exercise in motivated reasoning).
> I've always imagined AGI (perhaps naively) as being achieved by clever usage of ML, plus some utilization of classical/symbolic AI from pre-AI winter days, plus probably some unknown elements.
For what it's worth, this is my view as well. And I don't think it's particularly naive. Plenty of people have researched and/or are researching aspects of how to do this. But how to combine something like a neural network, with its distributed (and very opaque) representations, with an inference engine that "wants" to work with discrete symbols is non-obvious. Or at least it appears to be, since nobody apparently has figured out how to do it yet - at least not to the level of yielding AGI.
> but I've never heard a compelling argument for why pure ML would get us there.
The simplistic argument would be that ML models are, in some sense, trying to replicate "what the brain does" and it stands to reason that if your current toy ANN's (and let's be honest - the largest ANN's built to date are toys compared to the brain) are something like the brain, then in principle if you scale them up to "brain level" (in terms of numbers of neurons and synapses), you should get more intelligence. Now on the other hand, anybody working with ANN's today will tell you that they are at best "biologically inspired" and aren't even close to actually replicating what biological neural networks do. Soo... while people like Geoffrey Hinton have gone on record as saying that "ANN's are all you need" (I'm paraphrasing, and I don't have a citation handy, sorry) I tend to think that in the short term a valid approach is exactly what you suggested. Combine ML and use it for what it's good at (pattern recognition, largely) and use "old fashioned" symbolic AI for the things that it is good at (reasoning / inference / etc.)
It seems quite clear to me that human brains are not actually doing much symbolic logic. What symbolic logic we do do has been bolted on using other faculties. I think the problem is that reasoning about our own minds is incredibly tough. We want there to be some sort of magic sauce to what makes us, us, and so we reject things like ANN's that seem somehow too simple. I think it probably is right that we won't just be able to scale up the number of parameters and get human-like performance. There are hints that returns start to level off, but I'm also unsure why people are so sure we can't.
> It seems quite clear to me that human brains are not actually doing much symbolic logic. What symbolic logic we do do has been bolted on using other faculties.
I agree. But my interest is in engineering something that works, not necessarily in creating an exact replica of the human brain. That's why my interest falls into the domain of symbolic / sub-symbolic integration - because it strikes me as a faster path to more usable computer intelligence.
I have no problem believing that a sufficiently large ANN, with the right training and inference algorithms, could achieve AGI. My problem is that A. right now achieving that seems very out of reach to me (but I could be wrong) and B. it seems unnecessary to me to remain wedded to the idea of 100% (or even 90% or 80% etc.) fidelity with our biological brains. After all, if we want something just like a human brain, we just need a man, a woman, and 9 months of time.
Anyway, I think it's OK to think of engineering in "short cuts" by using things we know computers are good at, and things we already know how to do, and trying to combine them with ANN's in such a way as to make something useful. Will it ever yield AGI? I have no way of knowing. And even if it does, would that approach actually be faster than a pure ANN approach? Again, I don't know. But for now, I spend my time on symbolic/sub-symbolic integration nonetheless.
> I think the problem is that reasoning about our own minds is incredibly tough.
Very fair takes. I could certainly imagine elements being pulled in. For example, things like AlphaZero are, to my understanding, already coupling tree searches to neural nets. I sort of expect that any general solution would include some of that, but symbolic approaches seem to consistently do worse, despite lots of people thinking they won't and plenty of money to be made. I think part of the problem is that what we want with AI is to interface with humans, and humans are using something fuzzy to understand the world, so trying to model that rigidly will be hard.
Even if they did replicate how the brain works, our brains aren't one of these networks trained for specific things; they're millions, maybe billions, of them combined.
Indeed. The learning / training we do today for ANN's clearly isn't what humans do. So yeah, even if we had billion "neuron" ANN's that were more biologically plausible, we'd probably still have to figure out more about how human learning works, in order to come up with the right way to train the AI.
AGI seems hard because each year more and more problems that were previously considered close to AGI are solved.
Playing chess at a grandmaster level was considered something only a human could do until the 1990s, and now no human has beaten the best computer in 17 years, while AGI seems further away than ever.
Mark my words: we'll create an AI that can pass the Turing test this decade, but we'll still be as far away from the badly defined general problem as we ever were.
The chess example is not that strong: "the best computer", or more precisely the software that has beaten humans since the 1990s, was actually specifically designed for chess. That was the case until AlphaZero did the same in 2017 for a whole class of turn-based games.
To add to that, it is quite possible that AlphaZero is already a general intelligence. Specifically, it may be that given some robotic manipulation, and some goals in real world, and lots and lots of tries (tens or hundreds of millions) it may beat an average human in "life".
I agree. I read Jeff Hawkins' book On Intelligence [0] back when it came out, and it had a profound effect on my thinking. Chasing more data, aka "parameters", doesn't seem to be the right answer. I think more of a Bayes model, like spam filtering, but cobbled together with other Bayes models looking at other things until something emerges that we call "intelligent". Heck, I'd consider Google's spam filtering pretty intelligent today.
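For anyone who hasn't seen the word-count style of Bayes spam model being referenced, here's a minimal sketch (toy training messages, add-one smoothing, equal priors, all purely illustrative):

```rust
use std::collections::HashMap;

// Minimal naive Bayes "spam filter" in the word-count sense: count word
// frequencies per class, then score a new message by summing per-word
// log-likelihoods under each class.
fn train<'a>(msgs: &[&'a str]) -> (HashMap<&'a str, usize>, usize) {
    let mut counts = HashMap::new();
    let mut total = 0;
    for msg in msgs {
        for word in msg.split_whitespace() {
            *counts.entry(word).or_insert(0) += 1;
            total += 1;
        }
    }
    (counts, total)
}

fn log_likelihood(msg: &str, counts: &HashMap<&str, usize>, total: usize) -> f64 {
    msg.split_whitespace()
        .map(|w| {
            // add-one smoothing so unseen words don't zero everything out
            let c = counts.get(w).copied().unwrap_or(0) + 1;
            ((c as f64) / ((total + counts.len()) as f64)).ln()
        })
        .sum()
}

fn main() {
    let spam = ["buy cheap pills now", "cheap pills cheap offer"];
    let ham = ["meeting moved to friday", "lunch on friday with the team"];

    let (spam_counts, spam_total) = train(&spam);
    let (ham_counts, ham_total) = train(&ham);

    // Equal priors assumed, so we just compare likelihoods.
    let msg = "cheap pills offer friday";
    let spam_score = log_likelihood(msg, &spam_counts, spam_total);
    let ham_score = log_likelihood(msg, &ham_counts, ham_total);
    println!("spam? {}", spam_score > ham_score);
}
```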
Hawkins' way of thinking really maps well for me also. It seems like more parameters helps until it doesn't; then you need to encapsulate those networks and pin them to some reference frame, create hierarchies of these networks and a system to generalize and compress those hierarchies (aka patterns), rinse and repeat.
My brother just became a grandpa and I was watching his grandson navigate the world this past weekend. It's unbelievable how quickly the brain can extrapolate a new relationship between objects/actions/etc and then apply it elsewhere. Minimally you see it in the drinking action applied to all sorts of things, this sort of repetitive clenching/releasing of the fingers to find things to grip without looking, etc etc. Watching mom use a fork and very quickly understand how to grasp and manipulate it. The model of just training everything from exogenous data into a flat network seems like it will hit some asymptotic limit.
The scaling hypothesis says that we just need more processing power to achieve things we regarded as "impossible for non-intelligent agents". So far, the scaling hypothesis is proving itself correct despite still-prevailing skepticism.
It would be pretty embarrassing (or relieving?) if it eventually turned out there was nothing special about human intelligence, just that we crossed some threshold of neurons and other brain bits ("a few quadrillion parameters") to convincingly fool ourselves that we are self-aware, have agency, and do anything "intelligent" (other than some fancy stuff that looks like the physics/biology equivalent of state-of-the-art ML).
I am a proponent of using a working theory that intelligence is an emergent property and that we can in principle create new intelligences in a lab (or ML warehouse) if we provide the proper conditions, but that finding and maintaining those conditions is extremely hard. Some state-of-the-art research today aims to integrate recognition capabilities (image recognition and object detection/tracking on video, voice extraction from audio, text) with advanced generative models for language and behavior, as well as realtime rendering systems that can create realistic humans.
If we combine those, we can make a bot that appears fully interactive, passes all Turing tests, convinces a typical person it's another person... and still has nothing inside that researchers would call "artificial intelligence". It might even solve science problems that we can't, without having any spark of creativity or agency. Or maybe when we make a bot with all those properties, some uncanny valley is crossed and out pops something that has objective AGI?
As the wise robot once said, "if you can't tell the difference, does it really matter?". We should forge ahead with building datacenter-scale brains and feed them with data and algorithms, while also maintaining a cadre of research scientists who are attuned to the ethical challenges of doing so, an ops team trained to recognize the early signs of sentience, and an exec team with humanity.
I'd say that the view against ANN gives humans (especially researchers) more "dignity", in the sense that we still need to figure out some deep stuff and not just add hardware. I wouldn't treat this as an argument either way, just an observation.
Heuristically, we came to be by a very dumb process of piling up newer generations. If my pet would communicate with me on the level of GPTx, I would be very impressed. That's why nowadays I have some scepticism for the ANN critics' arguments, though think it would be neat if they were right.
The thing that I dislike the most in these discussions is the pervasiveness of the AGI concept and the assumption of a linear scale of intelligence. Again, I can intuitively say that I'm more intelligent than my pet: but to quantify this, we'd need to use something silly like brain size, or qualitative/arbitrary things like "this being can talk". I think that human intelligence is a somewhat random point in a very multi-dimensional space, one that technology may never even have a reason to visit. But people tend to subscribe to the notion that this is the very important "point where AGI happens".
> If my pet would communicate with me on the level of GPTx, I would be very impressed.
GPTx is not communicating with anyone. It is generating text that resembles text it had in its training set. The fact that human text is normally a form of communication doesn't make generating quasi-random text communication in itself. GPTx is no more communicating than a printer is when printing out text.
A cat or dog leading you to their empty food bowl is actual communication, and they are capable of much more advanced communication as well (especially dogs). The fact that it doesn't look like written text is not that relevant. They are of course worse than GPTx at producing text, just like they are worse than a printer at writing it on a blank page.
I'm most of the way towards agreeing with you, but I think you underestimate how far you could get without any major changes. Most of the brain consists of feed-forward processing, and what closed loops exist are probably replacements for backprop rather than essential to cognition. That's all the low level processing, from visual to motor. Now obviously we have higher level processing too, and it might be super weird! But no model we've made comes close to the size of even specialized brain regions, and study after study has demonstrated the power of the subconscious mind. Once we have big enough models, we might find out that all we need to take it to that final step is a while loop.
Does it really matter? If this new supercomputer means that ML engineers can iterate x% faster, which in turn increases FB's profits by even a small y%, I would think this would have already paid for itself.
We are so far from having "real" AI that it is amusing to me every time I read yet another article gushing over ML. ML is fundamentally pattern matching. It is impressive tech for what it does. But humans don't need 1 million carefully tagged images of chairs or cars to work out what a chair or car is. Our understanding of what general intelligence is hasn't progressed much since the last AI winter. The only real difference is that computers are much faster today, enabling old ideas to be fast enough for practical use.
Likely a joy for Rust and a desire to try something ambitious that hasn't really been done.
Rust doesn't have the mature libraries that C# offers, but it isn't barren anymore either. It does offer nice assurances, like entirely avoiding data races by default (when using safe Rust). Rust often leads to performant code, but it's hard to say if that's really a feature of the language or just a bias due to the kinds of people using it and the projects they work on.
If you're just looking to get into game development without any other goals in mind, then C# is a safer choice.
>Rust often leads to performant code, but it's hard to say if that's really a feature of the language or just a bias due to the kinds of people using it and the projects they work on.
It does make memory management much more explicit, which makes it at least easier to write performant code.
The question however is what your program is doing. If it's limited by network requests or something similar with high latency you won't be able to optimize much on the language side of performance.
If however you're processing a lot of strings, the safe borrowing mechanism can help you avoid most copying.
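A small illustration of that borrowing point (a hypothetical parsing shape, not the specific tool mentioned below): returning `&str` slices that borrow from the original buffer means the parse stage allocates nothing per field, whereas an owned `String`-based version would copy every field.

```rust
// Borrowed slices (&str) point into the original buffer, so parsing a
// big text file this way does zero per-record allocation; an owned
// version (String fields) would copy every field.
struct Record<'a> {
    symbol: &'a str,
    addr: &'a str,
}

fn parse(line: &str) -> Option<Record<'_>> {
    let mut parts = line.split(',');
    Some(Record {
        symbol: parts.next()?.trim(),
        addr: parts.next()?.trim(),
    })
}

fn main() {
    // In a real tool this buffer would come from reading the file once.
    let contents = "main, 0x1000\nhelper, 0x1040\n";
    let records: Vec<Record> = contents.lines().filter_map(parse).collect();
    println!("parsed {} records, no per-field copies", records.len());
    println!("first record: {} @ {}", records[0].symbol, records[0].addr);
}
```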
For example on my previous job we had a part of our CI build that would parse a huge text file and combine it with some information from an ELF's debug info in Perl. It took 45 mins for one execution. It was also impossible to read.
I rewrote it in F# which after some heavy optimization work worked through it in 2 mins.
I had just heard of Rust and rewrote it in Rust, basically transforming F# to Rust syntax and trying to eliminate as many copies as possible. Now the whole execution took 8s.
So at that point I was sold on Rust, however the library ecosystem was still lacking. Nowadays it's much better.
> I had just heard of Rust and rewrote it in Rust, basically transforming F# to Rust syntax and trying to eliminate as many copies as possible. Now the whole execution took 8s.
For what it is worth, I've had a very similar experience with the re-write of an internal tool in Rust.
I was in pure disbelief, running the next command in the toolchain this tool runs in, expecting it to fail on some empty input files, but it worked. 20x speed-ups almost feel like some essential work must have been missed or skipped, but when you realize that's not the case, it is a different kind of pleasure.
I hope this will encourage big studios to stop releasing broken games, but I doubt it will. The incentives are just so broken due to ease of patching, a need/desire for cash after a drawn out dev process, and a general disrespect for their customers.
I think releasing a "broken" game in the form of "early access" from smaller studios can be good in terms of iterative and community development, but also that can be abused too. These bigger studios really don't have as much of an excuse in my opinion.
The only solution I see is to stop pre-ordering games and don't reward studios that do this, but easier said than done.
Yeah, the other side of this is that PC players have oftentimes gotten buggy and unplayable console ports for years, and now that a PC-first development studio wound up screwing console players first, it's much more visible and the outcry worse. They never launched simultaneously on 9 platforms at a time before. Heck, not a lot of titles from well-established veteran studios across many prior console generations do that at all.
CDPR is no saint, but compared to the rest of the industry they come off relatively well. I'm thoroughly enjoying it despite some small bugs here and there, and given the massive size of the game and the really ambitious stuff they've done in animation, it's amazing what they got done in the time window they had between Witcher 3 and the original launch timeframe.
I hope CDPR learns the right lessons though and focuses upon engineering management and how to rein in their marketing better.
A lot of CDPR staff left in the wake of TW3 because of crunch and various other factors (presumably they weren't as well compensated for such a successful title as they expected to be).
CDPR cannot replace the talent that's gone - and it shows in Cyberpunk.
This seems to be a longstanding trend in game dev houses across the industry worldwide, so the question is whether it was sufficient to cripple Cyberpunk development into its current state, and whether the churn was worse than industry averages. It's not clear whether spending more money on developers would have made the release better, given that Cyberpunk is the most expensive game in history with only 30% of the budget spent on development.
It's very clear that spending more money will not bring back the talent that was lost - just because you're hiring more people doesn't mean the old, internalised expert knowledge of the former staff can be replaced.
The graphical details are amazing and the art direction is great in Cyberpunk (disregarding the bugs and stuff on console). But it's very much a high-production-value game without the singular vision that would've made it an excellent immersive sim (a la Deus Ex). And this missing singular vision is due to the lost staff who brought that element to TW3.
> I don't regret pre-ordering at all, because they have always done right by me.
Same here. I bought it on PC via GOG, and while it was rough and unstable, I happily put about 100hrs of game play into it.
Honestly, they should have released it as under-development on platforms like Steam that support such designations. It's pretty common for studios to release games in an alpha/beta state. At least that way, gamers would know that they are getting a potentially buggy release.
For consoles, they straight up should not have released it until v1.07 at minimum. That's where they really screwed up. The game is in a much less playable state on the PS4, and console gamers in general are used to a much more polished gaming experience.
I don't get how they would draw _so damn much_ attention to a broken game. The marketing team should have talked with development every once in a while...
Look at me! Look at me, I got these awesome new pants and I also shat in them.
> I don't regret pre-ordering at all, because they have always done right by me.
I bought the game a few days ago and, apart from the NVIDIA RTX timed-exclusive, I haven't run into any issues with it. It's already a touch above the usual Witcher release.
It's also worth noting that, at least from my perspective, there was a vocal crowd begging for the game on account of "something to do during the pandemic, assuming bugs warts and all". I'm not sure if this is why they released an objectively non-functional game to previous-gen consoles, but I would be inclined to believe the excuse.
I don't think the size of CDPR is the reason here. They are a public company now. Which means they aren't lying to some potential players; instead, they are lying to the investors.
They made promises about work environment that weren't fulfilled.
They blatantly lied about previous-gen console performance. PC gamers are used to games not running well, but the main appeal is that games usually run within the baseline. CP2077 is unplayable on PS4 and XB1.
CP2077 isn't TW3 level of unfinished. I think CDPR has bitten off more than they can chew.
I think CDPR made a lot of serious marketing mistakes but hype building is largely organic - rando game reviewers end up being a large portion of the momentum behind hype trains.
The Cyberpunk game subreddit was its own worst enemy: people were winding themselves up with hype that was never, ever going to be even remotely fulfilled, speculating and dreaming about features and stuff that was patently never going to happen, etc.
I could honestly never understand this. There's been massive hype over this game for like a full year before release. But we didn't even know what the gameplay was even like until like a month prior to release because they had shown basically nothing of it aside from a few videos of V looking into mirrors and what not.
People should have learned this lesson after No Man's Sky. If people are merely conjecturing about possible gameplay and features that they don't even know will be in the game, they're going to be dissatisfied with the game.
It's a tragedy of the commons situation with gamer enthusiasts acting against their own best interest.
If people can't delay gratification for something as inconsequential as "non-broken video games", I don't see how any personal responsibility campaign has any chance of working for things impacting society at large such as climate change, overfishing, public health, etc.
I don't think it's fair to blame this on gamers. With all of the hyperbole over various games being "broken", most people that are hyped about a specific game are just going to buy it and see for themselves. Unless it's literally unplayable (as may actually have been the case here), most people won't refund it.
This has been going on for years and years, it's just getting worse over time. It's always some variant of this conversation at $GAMEDEV_STUDIO:
Focus group feedback: Our test groups are noticing 10% of players are running into this bug/issue. It's frustrating them, but there are workarounds.
Management: All of our marketing materials target release date XX/XX/XXXX. If we try to fix this bug we'll have to push the release... How many people will _not_ buy the game because of this bug?
Focus group feedback: Nobody that would have otherwise bought this game would decide not to buy it over this issue.
Management: So we ship as planned, and fix the bugs in a patch.
Over time studios realized that you can get away with much bigger bugs affecting much larger portions of players. Ship sooner, start recognizing revenue, and push post-launch patches to fix the "really bad bugs". It's shocking how bad the quality has to get before it starts making headlines.
I disagree that this is getting worse. Every game of this magnitude of complexity has shipped broken, even way back in the nineties. The Elder Scrolls series in particular comes to mind. Back then, you'd get patches from print media.
The games that didn't ship broken simply weren't that complex. Console games never were that complex. PC gamers accepted this in order to be able to (sort of) play through an experience that was at the edge of what was possible. There was no Digital Foundry to count pixels and analyze frame drops. If you hit 20 FPS most of the time, that was considered "playable".
If a game like Cyberpunk 2077 can't ship broken, then it can't ship at all. It can't even get produced. Nobody is going to put hundreds of millions of dollars on the line to maybe ship next year, forever. Nobody except maybe the Star Citizen community.
From a very, very casual gamer's perspective, how did expectations get so high? Consumers are demanding more, companies are promising more, developers are worked to the bone and everyone still ends up unhappy.
Does pricing, pre-orders, or online play have anything to do with this? It feels weird that a sports game can change a few names and ship a $60 title, but a company like CDPR goes through absolute hell and still ships a dud -- what's the incentive for that?
> From a very, very casual gamer's perspective, how did expectations get so high? Consumers are demanding more, companies are promising more, developers are worked to the bone and everyone still ends up unhappy.
The dynamic is the same in films, where all of the studio's profits come from a few blockbusters (sometimes called 'tentpoles'), in publishing, where all of the profits come from relatively few bestsellers, and for that matter in venture capital, where all of the returns come from a few unicorns.
BTW, you might be interested to note that in film at least, the "work to the bone" component is largely missing (not that film crews don't work hard, they do), and that the film industry is unionized to the hilt (Screen Writers Guild, Screen Actors' Guild, Directors' Guild of America, International Alliance of Theatrical Stage Employees, etc. etc.), nor is pervasive unionization any sort of barrier to incorporating equity compensation (often in the form of residuals) for key talent.
Without going into too much detail due to contracts/NDAs/etc., slipping a release date is an even bigger bother for others down the chain. There are planned times for manufacturing, warehousing, distribution, all that fun stuff for the physical versions of titles. All of that would basically need to be re-dated from scratch. You can't slip one week; you have to slip at least a month. More for platforms that don't use standard disc formats, which are not made locally. (Which hilariously, CP2077 already did slip a month before release.)
Even for digital games, there are still approval processes where the first parties have to test the game out. This process involves scheduling people for it; you can't just go to the front of the line, as there are other games that have been scheduled for certain slots. (Which hilariously, it was rumored that CP2077 was given the 'don't test, push live ASAP' treatment.)
And lastly, all payments from the platforms and retailers are based on the actual release date. Unless there's a specific contract, games are not paid for until months after release. Physical preorders don't pay the developers; they just help with preventing over/under stocking. (And digital preorders are... functionally worthless beyond the psychological value.) The release date starts the payment timer. When hurting for cash, releasing can start that timer.
The processes above can really benefit abusers who decide that "making street-date" is the most important thing above all other concerns.
Stop announcing release dates until it's 90% finished, or all major bugs are fixed and you're just 3 months away from being ready.
Publishers are a problem too; they pressure studios to release games around the holidays, or the console manufacturers do because it helps sell hardware around the holidays.
From what I gathered it's mostly fine on PC (they're a PC shop, after all); the console versions probably needed at least 6 months more work to be polished. People were screaming for it to be released no matter what, or to stop making excuses, no matter how much crunch the devs were already doing. If they had released it as an "early beta" like a lot of games do, or just said up front, "okay, we're releasing it but it's not finished, so you can play it but you're getting the beta now and we'll be fixing it with updates," I think hardcore gamers would have understood. It would just not look good for release sales, and I'm not sure the game media would care.
I waited for the reviews. When they were over 90% I pulled the trigger.
Now I realize I purchased a game that was reviewed on what it will eventually become a la No Man's Sky, not what it was on the day of review.
Sure, the crashing didn't affect me, my configuration was more or less normal I guess. Instead what I got was a hollow game that has a lot of hooks ready for eventual expansion sometime in future patches. That didn't deserve the 91% it had when I first bought the game.
I don't blame fellow gamers. I blame the reviewers.
Reviewers had the access-media problem; it was discussed at length on the 1-up podcast. They can't be trusted, especially on AAA titles, or they risk being shut out of preview copies for the next releases, which is bad for business. I just wait for user reviews, after a couple of weeks for the hype to die down and for people to actually spend some time in it, then I generally look at the worst reviews first. Unless there's a compelling reason to have a game immediately (like it's primarily online and all my friends are playing it), it's better to be a patient gamer.
That's sort of what I'm alluding to. Personal responsibility doesn't work when you need collective action, so something else needs to step in to fix this. Reminding gamers to not pre-order is pointless.
> Management: So we ship as planned, and fix the bugs in a patch.
What's curious, and I assume therefore legally-reasoned, is the consistent lack of preparation for blow-back by companies. Some part of CD Projekt Red knew the game was broken on older consoles.
It feels like the only real solution to this is to have legal QA documents, signed off on by QA (as factual) and executive leadership (as read and understood).
If there's a magically missing set of older console tests, someone in leadership goes to jail. If leadership publicly misrepresents the stability of the game despite knowing about substantial defects from QA reports, someone goes to jail.
Who cares about video games, but this is indicative of a broader social problem allowing executives to feign ignorance and create systems that deflect blame downward. Either you're running the company or not. And if you are... then the legal ramifications should ultimately land at your feet.
There is also the component of building hype via marketing in order to generate pre-orders. With CP2077, they had made back the entire development costs immediately after launch. This means, refunds notwithstanding, that by the time your customers notice the state the game is in, you are already profitable, and have all the time in the world for PR damage control and patches.
> If people can't delay gratification for something as inconsequential as "non-broken video games"
This doesn't make sense. The game was announced 8 years ago, and it was delayed 8 months. The company said the product was ready and they published a product that was not ready. I've delayed my gratification 8 years for it.
This isn't on the players. That's very much in the same line as blaming someone driving their car for the Gulf Oil Spill.
This is 100% on CDPR management. It's their job to set the right deadlines, to manage expectation and hype.
They failed, and should be held accountable for that failure.
EDIT: Yes, downvote this. Support CDPR's management and their shitty practices with their employees and their lying to players and investors. I'm sure you'll love the games that come about as a result.
I think there were some really big marketing mistakes but most of the backlash on CP2077 seems to be over the insane levels of hype. I had pre-ordered this game a long time ago and it was genuinely fun on release, there are some bugged quests and I don't have a card capable of rendering ridiculously good graphics but it's playable and fun.
From what I've heard, the PS release is absolutely worth getting mad over - it's likely that CDPR should have just given up on even attempting a PS release given how poor the performance is, but it probably needs some serious investigation to see what pressure Sony was putting on them to make sure it was available.
I wonder how much of this is broken QA throughout the industry. I burned ~60 hours in cyberpunk on a ps4 pro, the content of the game was fun - but the bugs were pretty dumb. Many of the worst bugs originated in story pathways that would only be triggered if various conditions had occurred (which undoubtedly changed during development).
From a testing perspective, it seems like it would require an impossible amount of QA time to vet all of the quest paths as a player, and it would be easy to miss game-breaking bugs if QA testers were using manipulated save files. Issues like the bad police AI only crop up once in the main game, but are pretty noticeable throughout free roam.
If players want games to get bigger, will we need smarter and more automated QA tools? What would those look like?
> Q: Open-world games are often really buggy, because there’s just so much going on. But I experienced very little of that in my time with Breath of the Wild. How did you pull that off? Was it just a really extensive QA process?
> Dohta: There was another point that we developed during our QA process. We came up with a number of scripts that would basically allow the game to be played automatically, and allow Link to run through various parts of the game automatically. And as that was happening, on the QA side of things, if a bug did appear I’d suddenly get a flood of emails about it. That was one tool that we found to be really handy.
Breath of the Wild used a tool to do automated run throughs as part of their bug testing suite. This is just a quote from one interview, but if you do a bit of Googling you can find some good information about their development and planning process.
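A minimal sketch of what that kind of automated run-through might look like in spirit (all of the types, actions, and the "stalled quest" invariant are stand-ins, not how any real engine does it): replay a scripted route, check invariants after every step, and collect violations into a report instead of waiting for a human to notice.

```rust
// Toy automated run-through: apply a scripted sequence of actions to a
// stub game state, check invariants after each step, and collect any
// violations into a report. All names are hypothetical.
#[derive(Debug)]
enum Action { Move(i32, i32), Interact, WaitSeconds(u32) }

struct GameState { x: i32, y: i32, quest_stage: u32 }

impl GameState {
    fn apply(&mut self, action: &Action) {
        match action {
            Action::Move(dx, dy) => { self.x += dx; self.y += dy; }
            Action::Interact => { self.quest_stage += 1; }
            Action::WaitSeconds(_) => {} // bug stand-in: the "call" never arrives
        }
    }
    // Invariants the harness checks after every scripted step.
    fn check(&self, step: usize) -> Option<String> {
        if self.quest_stage == 3 {
            return Some(format!("step {step}: quest stalled at stage 3 (NPC never called)"));
        }
        None
    }
}

fn main() {
    let script = vec![
        Action::Move(5, 0),
        Action::Interact,
        Action::Interact,
        Action::Interact,
        Action::WaitSeconds(86_400),
    ];
    let mut state = GameState { x: 0, y: 0, quest_stage: 0 };
    let mut report = Vec::new();
    for (i, action) in script.iter().enumerate() {
        state.apply(action);
        if let Some(problem) = state.check(i) {
            report.push(problem); // in a real pipeline this would page QA
        }
    }
    println!("automated run finished with {} issue(s): {:?}", report.len(), report);
}
```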
From what I understand, they also promised a bunch of features that were never implemented. Maybe they're still to come, but it sounds like they are months away from fixing all the bugs, let alone implementing better AI and other things players complained about.
I bet it would have been met with less backlash if they delayed only the PS4/Xbox S versions. Players must expect some Day 0 bugs from games this large.
> Many of the worst bugs originated in story pathways that would only be triggered if various conditions had occurred
I think the absolute worst bug I encountered was because of this, but I don't believe the story pathway that triggers it was uncommon. In fact, that condition apparently has a major effect on the story later.
Basically it was part of a main story quest where you have to wait a day for a character to call you before the quest can continue. But I never got a call. I saw a lot of threads on the issue, and it seems the bug happens if you did some optional dialogue right beforehand. But unlike other optional dialogue in the game, this optional dialogue was really hard to miss. I had to actively run past an NPC to avoid it.
I ended up losing about an hour of gameplay from that bug :/
I agree with you that it was likely because they changed something last minute and never checked it again; a QA failure for sure. And my theory is the COVID situation made QA testing a nightmare. Still ... that bug was brutal. Every "wait for X to call you" quest after made me anxious.
It's not rocket science. It's devs treating QA like second-class citizens and substituting (cheap) person-hours for proper technical tools.
Any sequence of events can be represented as a directed graph.
Any event check can be validated against that directed graph as feasible.
Instead, Bethesda (equally guilty) and CDPR seem to let their devs add whatever checks, and then trust QA to untangle and validate the infinite number of combinations.
tl;dr - open world games are incompatible with traditional QA methods and tools
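A minimal sketch of the directed-graph idea above (quest and flag names are hypothetical): encode each event's prerequisites as edges, then validate offline that every referenced prerequisite exists and that no chain of prerequisites is circular, rather than leaving QA to untangle the combinations by hand.

```rust
use std::collections::{HashMap, HashSet};

// Quest logic as a directed graph: every event lists the events that
// must have happened first. We can then check, offline, that every
// referenced prerequisite exists and that no set of events depends on
// itself in a cycle (which would make a quest permanently unfinishable).
fn validate(prereqs: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    let mut problems = Vec::new();

    // 1. Every prerequisite must be a known event.
    for (event, deps) in prereqs {
        for dep in deps {
            if !prereqs.contains_key(dep) {
                problems.push(format!("{event} requires unknown event {dep}"));
            }
        }
    }

    // 2. Cycle detection by repeatedly removing satisfiable events
    //    (Kahn-style topological sort).
    let mut remaining: HashSet<&str> = prereqs.keys().copied().collect();
    loop {
        let ready: Vec<&str> = remaining
            .iter()
            .copied()
            .filter(|e| prereqs[e].iter().all(|d| !remaining.contains(d)))
            .collect();
        if ready.is_empty() {
            break;
        }
        for e in ready {
            remaining.remove(e);
        }
    }
    for stuck in remaining {
        problems.push(format!("{stuck} can never trigger (circular prerequisites)"));
    }
    problems
}

fn main() {
    let mut prereqs: HashMap<&str, Vec<&str>> = HashMap::new();
    prereqs.insert("meet_fixer", vec![]);
    prereqs.insert("heist_briefing", vec!["meet_fixer"]);
    prereqs.insert("wait_for_call", vec!["heist_briefing", "optional_dialogue"]);
    prereqs.insert("optional_dialogue", vec!["wait_for_call"]); // oops: a cycle

    for p in validate(&prereqs) {
        println!("QA: {p}");
    }
}
```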
The problem isn't releasing a broken game. There's a huge challenge in making and releasing games and meeting a specific quality bar.
That said, knowing you have a broken game and saying it's great is extremely avoidable and totally should stop. Tell me the game is a mess and let me play with it.
> releasing games and meeting a specific quality bar.
Only if you're holding yourself to an impossible deadline. Given time – and, I'll argue, developers who weren't burnt out by the work schedule – this could have been resolved. But they didn't take that time; they went ahead with a non-functional game just to meet the deadline.
They abused their developers to meet a (demonstrably) impossible deadline. This is a terrible way to run a business on many levels.
I'm not sure that's actually true. Having worked in the industry, games do not always get better with more work and time. Some systems are just too complicated to fix faster than bugs get introduced.
Having played Cyberpunk on a PS4 and then a PS5, it is nowhere near what you'd expect from a AAA title in terms of quality. I had to stop playing because it's not worth ruining the experience.
I can only imagine what the folks at Rockstar are thinking about this launch and what they can take away from it.
And as a counterpoint, I'm playing on PC and having a lot of fun. So far I've seen only one bug (Panam's phone somehow froze in mid-air during her talking scene).
I must say that I'm a bit glad that this time the consoles were taken less seriously than the PC; in the majority of cases it's quite the opposite (with a few exceptions like GTA V and CP2077).
My PC experience was good too. I saw the "T-pose on a motorcycle" bug a couple times and I sequence-broke a mission once, but that's it. If 2077 had been enterprise software, it would be the smoothest and least-buggy enterprise software I had ever used by a mile.
It sounds like last-gen consoles leaned very heavily on the LOD system, it caused more bugs than expected, and they didn't allocate enough time to fix them.
I played the game on PC too, and for the first half or so I agree: the game was more "very, very unpolished and rough" as opposed to "broken".
However, it got worse and worse the closer I got to the finale. In total I encountered around 10-12 situations that prevented me from continuing and required game restarts and/or loading of older saves, and I had to repeat my final mission 3 times because of a game-breaking bug. That is on top of all the minor bugs others have already mentioned.
While I was able to at least get between 50 and 80 fps, the performance was absolutely terrible considering my specs (>$3500 PC build).
All in all I was "relatively bug free" compared to the experiences of others. What I don't understand are people saying it would be "a masterpiece" without the bugs, I disagree, the game was very bland in my opinion and I regret my purchase and the time I invested regardless of the bugs and problems. It wasn't TERRIBLE, but I wish I had spent the money on another game.
You are fortunate. I've encountered at least two dozen different bugs that were only resolved with a reload (or in some cases reverting to a previous save). And the crashes, my god. I would wager at least one crash per hour or two of play.
That being said, I think it's a fantastic game (though somewhat shallow in terms of choice) that I am still having a great time with.
Yeah, I'm finishing up round 2 of the game and loving it, with a few annoying bugs but nothing that killed my joy. I'm finding it an excellent experience that's sucked me in way harder than any game in a long, long time. I really left my heart with the Aldecaldos.
We saw outsourcing of engines to companies who focused on that as their core competency. Then middleware. As AAA budgets go up to maintain pace with SotA, it makes less sense to trash-and-start-from-scratch (traditional practice).
DLC on online games is the ultimate realization of this. Why sink the cost of rebuilding a game, when you have a battle-tested core, with bugs already fixed, and content tooling already created, that you can start from? Essentially: EA {Sport} {Year} model, for everything else.
> It is nowhere near what you'd expect from a AAA title in terms of quality. Had to stop playing it because it's not worth it to ruin the experience.
Can you clarify a bit whether you were hitting game play bugs or visual issues?
On the PC side there are occasional visual glitches but the game play is relatively stable - I've had trouble with one quest line (the delamain one) and the enemy AI can get stuck sometimes - but it hasn't interrupted game play too badly.
Rockstar is smart in that they don't hype up their games at all. You won't even know GTA6 is going to be released until the product is 6 months from finishing.
I think this is the crux of the issue - Rockstar seems to be pretty heavy handed in keeping their marketing department in check, it's a thing more developers and publishers need to learn. Early and inaccurate marketing is a liability - quantity does not equal quality and marketing departments generally feel like they're judged based on quantity (and often escape issues with over-hyping in favor of shifting blame onto developers).
> The problem isn't releasing a broken game. There's a huge challenge in making and releasing games and meeting a specific quality bar.
That's true for any complex product. There are reasonable expectations, and indeed laws, about products being fit for purpose. I don't see why video games are unique.
It's possible to patch them after release. This partially explains, but does not excuse, the pattern of games releasing in a broken state.
I'm not suggesting it's unique, but I am suggesting that expecting them all to be delivered fast, high quality, and reasonably priced is a very tall order.
> I hope this will encourage big studios to stop releasing broken games
The game wasn't "broken" at all. I've finished it a couple of days ago. It works fine on the PC.
It wasn't a matter of releasing an unfinished game as in the "early access" model like you are describing. It was a matter of deciding to release the game on platforms that were underpowered, like the PS4.
Even the stock, poor-spec PS4 wasn't as bad as the media would lead you to believe. It runs at 30 fps with occasional drops to 20 fps and seldom/very rare 15 fps, MUCH better than past critically acclaimed games like Control, which shipped garbage running at a literal _10 fps_. The PS4 will also crash on you once every ~5 hours, probably unfixed memory leaks. Still, the game saves often, so it's not a problem, more of an annoyance. Other than that it's smooth sailing.
Absolutely the big studios should stop releasing broken games.
From a consumer standpoint - just wait a little. Don't get AAA games at launch. If you wait a few months, you'll get a much more stable game. If you wait 1-2 years, you'll get the game with DLC for $20.
"Don't preorder games" is the new "year of the Linux desktop". I've heard it over and over but have seen no evidence that it's any closer to being a reality.
Most of the population outside of the natural time zone lives in small rural villages, so they just ignore the official time. There is a major city far in the west, and they use two time zones simultaneously. It's a contentious issue, because which time zone you use is basically race-based. Han Chinese use Beijing time, and Uighurs use the local zone.
Yeah people are "lazy" in that they don't want to spend precious few free hours of their weeks grinding out a gimmicky skill-set that is at best tangential to the actual work they would be doing.
I understand there is no simple solution to how to hire software engineers, but you should be able to recognize why a lot of us don't like the current status quo of interviewing (even if you thrive in it).