Follow me on a brief thought exercise:
It’s 2016. You shoulder off your jetpack and head inside your gleaming, silver podhome in New New York City, Mars and plop down on your couch to play some videogames. Your choice of consoles includes the PlayStation 3, the Xbox 360 and the Nintendo Wii.
Each system has a robust digital distribution storefront, fantastic operating systems and an immense library of games. As your robot housekeeper fixes dinner you enjoy a brand new game on one of these old consoles, made by a great developer.
Does this sound awful?
OK, forget the sci-fi stuff and just imagine a near future where the big three console manufacturers have decided to forgo the technological arms race in favour of continually refining their current generation of systems (though that’s even crazier sci-fi, isn’t it?). The idea of console generation stasis, even though the market makes it all but impossible, could actually be sort of great.
That might sound a bit crazy, but hear me out a few minutes longer. A great deal of ground is lost in the forward-backward shuffle that every new launch involves. Even as improving technology allows for higher-quality environments, better animation and everything else more powerful components enable, the work of stabilizing hardware production and optimizing operating systems and network features takes a lot of time. Imagine if that time were spent refining the feature set of a company’s existing console: the dividends for consumers and players would be immense.
A longer generational cycle wouldn’t hinder innovation; it would just force developers to work in a different (possibly more rewarding?) way. The current model sees mainstream development teams constantly readjusting to new technology and learning new methods of videogame creation instead of refining their existing techniques or nurturing creative concepts. There’s evidence for this line of thinking: some of the most inventive and highest-quality games have arrived near the end of hardware generations. This is in no small part because, by that point, developers are less interested in impressing via spectacular technology than in impressing via spectacular gameplay.
The PlayStation 2, in particular, saw some of its best-loved titles arrive during its twilight years (God of War and Shadow of the Colossus both released in 2005) because, by then, the intricacies of the console had been thoroughly mapped. Developers understood the technology they were working with, and their creativity was honed by its confines. When the PlayStation 3 launched in 2006, it languished with launch games like the fairly lacklustre Resistance: Fall of Man and a bevy of titles ported over from the maturing Xbox 360 line-up, while the same year gave the PS2 gems like Bully and Okami.
As the PS3 struggled to assemble even a modest line-up of worthwhile titles, the aging PS2 was faring far better. Compare 2007’s PS2 releases of God of War II, Shin Megami Tensei: Persona 3 and Rogue Galaxy to the same year’s PS3 releases of Lair and Heavenly Sword. All of these examples, not to mention the fact that every console’s “launch line-up” is a bit of a joke, show just how hard it is for developers to translate their skills to new technology. Couple this with faulty initial console production and troublesome internal software, and it seems that generational stagnation (even for just an additional four or five years) could lead to great things.
Those wanting to stay on the cutting edge would still have the PC market to satiate their needs (much like they do now) but general audiences would be able to play more inventive games on more stable consoles. Many of us would be able to spend more money on software, less concerned about the cost of new hardware and the looming obsolescence of our current library. That doesn’t sound too bad!
As videogames march headlong into the future, it’s worth taking a breath and wondering why technological progress just for the sake of progress is always assumed to be a good thing. Couldn’t the industry benefit a bit by allowing creators to focus more on still-evolving theories of game design rather than force them to play a constant game of catch-up?
Reid McCarter is a writer, editor and musician living and working in Toronto. He has written for sites and magazines including Kill Screen, The Escapist and C&G Magazine, maintains the videogame blog digitallovechild.com and is Twitter-ready @reidmccarter.