Drawing on some experience, we enumerate ideas regarding game design and some of the more common pitfalls to avoid when creating a game. This section is split up by game genre, with general pitfalls that apply to all genres mentioned on this page. The writing style is kept mostly historical: even where a statement might read as an opinion, the pages back it with references to interesting events in game history that would otherwise exceed the scope of this section.
If a belief needs dispelling, it is that creating games pertains more to being a programmer than an artist. Typically, if you are one of those people who sit through game credits, you will notice that a major release lists a handful of programmers and a battalion of artists.
Truth be told, game design historically meant one developer roughing it out alone at a computer and recreating everything from the properties of physics upward, drawing the actual art and then writing the code that fits everything together.
Nowadays we have "game design environments" such as the Unity game engine that do not even require coding and where props or assets are simply arranged by dragging them onto the scene. Similarly, the problems of physics, mostly mechanics and Newtonian physics, have been done to death by now, formalized and packed into library APIs called game engines or physics engines that just need to be called when necessary, without requiring the developer to derive their own formulas for the most basic of principles, short of micro-optimizing physics equations.
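As a rough illustration of the kind of Newtonian mechanics such engines package behind an API, here is a minimal sketch of semi-implicit Euler integration for a projectile under gravity. The function name, tick rate and all constants are our own illustrative assumptions, not any particular engine's interface:

```python
# Minimal sketch of the mechanics a physics engine hides behind its API:
# semi-implicit Euler integration of a projectile under gravity.
# All names and constants here are illustrative, not a real engine's API.

def step(pos, vel, gravity=-9.81, dt=1.0 / 60.0):
    """Advance one simulation tick (semi-implicit Euler)."""
    vx, vy = vel
    vy += gravity * dt          # apply gravitational acceleration first
    x, y = pos
    x += vx * dt                # then move with the updated velocity
    y += vy * dt
    return (x, y), (vx, vy)

# Simulate one second of flight at 60 ticks per second.
pos, vel = (0.0, 0.0), (10.0, 10.0)
for _ in range(60):
    pos, vel = step(pos, vel)

print(round(pos[0], 2))  # → 10.0 (horizontal distance after ~1 s)
```

A real engine does the same bookkeeping for thousands of bodies, plus collision detection, so the game code only supplies initial conditions and reads back positions each frame.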
With all that said, the onus has now shifted onto art, and not only in terms of graphical artistry but also in terms of the quality of writing. Adventure games along the lines of "Broken Sword", "Full Throttle" or "Leisure Suit Larry" were not cherished because they had the very latest 3D animations, but because of the humor specific to the writer, which often provided a hearty chuckle and allowed the player to carry on playing without getting tired or feeling overwhelmed.
Art can be bought in case the game creator is a writer and not a graphics artist, which is perfectly acceptable, but graphics art can be costly. However, buying art and creating a boilerplate game results in an "asset flip", where the game itself is pointless because it just contains assets created by someone else, animated by a game engine, with little personal contribution from the author. AI-generated art might seem to come to the rescue, but because it recycles past ideas, players tend to experience an eerie feeling of a tight game loop with AI art, and rendering becomes expensive once you realize that many takes and shots of the same objects or enemies are needed from many angles.
To sum it up, becoming a game creator is more about artistry, be that in writing or in graphics, than about technological prowess, which is how the independent ("indie") game scene came to be, with many titles becoming blockbusters even though they were created by people with no programming experience.
Contrary to common belief, and perhaps mainly due to superficial knowledge of the market, people who play games are not that enticed by the quality of the graphics. If that were true, then the creators of Angry Birds (Forbes) would not be as wealthy as they are. One very recent flop at the time of writing (2025) is that of Frostpunk 2, a highly acclaimed and anticipated game given that Frostpunk 1 was a massive success. The difference between Frostpunk 1 and Frostpunk 2 is a whopping rating gap of 9.7 versus 7.1 (from https://steampeek.hu).
We would wager that the developers' decision to bump up the graphics requirements between Frostpunk 1 and Frostpunk 2 was a very bad one, given that this is a top-down city builder with "colony" and "simulator" elements, such that nobody truly cared what the graphics were like. This is without mentioning that retro-gaming elements have re-surged as a source of inspiration, such that many games now contain pixel art and various other elements that could not be described as the bleeding edge of graphical capability. Whilst Frostpunk is mentioned in our gallery of unique games, there are memorable flops such as "Doom Eternal", where the developers spent years on eye-candy before finally releasing the game; by the time "Doom Eternal" came out, players were not even tracking the game anymore, and the sequel to a very popular title did not receive the expected attention relative to other games on the market that did not obsess about graphics and released what they had sooner.
In fact, it might be the case that, similar to robots, games in general have their own "Uncanny Valley", as coined by Masahiro Mori (a robotics engineer): a human's response to a game with characters that look too realistic is, counter-intuitively to those who push for better and better graphics, mostly negative instead of positive. In other words, the more realistic a game gets, the more a human being is put off, to the point of not really wanting to play the game at all. The same "Uncanny Valley" principle extends to engineering games, or games with an extremely high degree of complexity, which counter-intuitively end up becoming a niche because the tasks to be carried out in the game resemble "real life work" too closely.
"Eye Candy" and bigger and better graphics are the equivalent of "quantum mechanics" for people that are not even physicists by trade nor could they solve an equation for the life of them yet take solace in discussing a domain that does not scale with creativity, nor ingenuity, nor literary capabilities but with the sheer amount of resources that you can throw into a blending-machine that churns and crunches numbers more than often to approximate curves and make things seem nice and round.
When graphics cards started appearing, they seemed counter-intuitive to many gamers, such as Commodore Amiga owners, because they did not seem to change much visually. Console owners already had "3D graphics" aplenty, and it was not really clear what this new device on the market would do. The idea here is one of convenience: whilst Donkey Kong was motion-captured and turned into sprites, making the experience look 3D for all intents and purposes, graphics cards helped game developers by only asking them to describe a frame, after which the computer rendered the scene on its own, without requiring the developer to translate 3D graphics to a 2D scene (for instance, having the artistic perception to know where and at what angle to place a shadow on a drawing given multiple light sources, something a graphics engine and a graphics card can do automatically). Trivially, aside from color depth, monitors until very recently had no depth component and everything is actually displayed in 2D, so what could 3D graphics possibly bring to the table as "new" if it cannot be experienced in three dimensions? True 3D, as in virtual reality or augmented reality using various optical tricks, is only becoming a reality just now; all games that everyone plays to this date, except VR/AR, are really 2D. The difference is that earlier games either used motion capture and then transformed everything into sprites, or hired graphics artists knowledgeable in painting scenes with the correct placement of shading and shadows to make the scene look 3D.
Either way, it must be noted that the benefits of having the latest and brightest in graphics technology follow a bell distribution where, if you find yourself after the 2000s, you are already past the point of diminishing returns. In fact, one of our favorite conspiracy theories is that many developers and game companies strike chains of deals through various layers of software creators in order to push for certain operating system or graphics card upgrades. The reality is that even if one took away most of the advanced graphics from most current games, most people would still be happy to play them and, to come full circle, note that "Angry Birds", a game with no claim to graphics, was played by a wide array of individuals, not only hardened gamers that "could put up with the modest graphics more than casual players".
Recent years have let politics hit the gaming scene, and one of the talking points has been tense debates on the level of difficulty in games. One camp considers that players should get better at gaming in order to surpass the baseline difficulty implemented by the game, usually by refusing to allow the player to set a lower difficulty; the other camp considers that difficulty settings should not really matter because games are just a matter of entertainment.
Both parties are right, in different contexts. In a competitive gaming setting, the same default setting is applied to everyone participating so that everyone has to beat the same difficulty level. Similarly, in terms of achievements and online rewards, achievements are very often locked to a given minimal difficulty setting so that players cannot switch to a lower difficulty and then obtain the same rewards as a player that has played on a higher one. Otherwise, even touching the point of sharing "Save Games", something that we offer all the time, having saves around that allow players to skip large portions of a game helps game journalists, vloggers and others who would like to replay or revisit a specific scene in a game (it's why we invented Horizon, in the end!). Similarly, a common design pattern is to create a game that can be played by a single player but also over a network by multiple players, such that a save game that completes the game progression and offers a fully explored map is a blessing to those who want to load up the game and, say, have a versus match without having to replay the story again.
However, perhaps the worst part of these political debates is that they have shifted game design into a stage where players do not know what to pick because there is no baseline anymore. Even though one could claim this is great because players can just mix and match what they like in terms of difficulty, especially in games that allow partial customization of difficulty levels, it becomes unclear what the creator of the game intended. Games are a literary art form expressed via technology, and many games, in particular those with a story-mode progression, are told like a tale or read like a book, such that the actual difficulty level becomes tied to the literature. If you have played the Metro series, it is clear that the lack of ammo, combined with the lack of oxygen masks and the difficulty of the enemies, renders a bleak story that emanates survival under conditions of desolation and despair, which is exactly what the whole literature around Metro revolves around as a post-apocalyptic artwork. Had the difficulty been pushed any lower, given human subjectivity, players might have remembered the game as a walk in the park and not a harsh game. When we play games, we always check for the baseline or, in the formula often found in games, "the way it was meant to be played", and that is the difficulty attempted first.
Lastly, game difficulty settings have historically been more a matter of number crunching than of a more all-encompassing notion of "difficulty". That is to say, for a game developer to make a game "more difficult", they would typically bump up, say, the amount of life points an enemy has, increase their damage, or even cripple the player units. However, all of the former is just a matter of addition and subtraction, such that the game gets "more difficult" but mostly only in terms of scale. A more all-encompassing notion of difficulty would also include, say, harder puzzles or other unexpected turns of events, something that is not easy to realize technically in a linear progression. Later, more advanced games, at increased cost, managed to adapt to the player in order to understand their playing style and dynamically create challenges. We suppose the takeaway lesson is that number crunching is not the pinnacle of "game difficulty" and that difficulty, in general, could engage the player in terms other than sheer endurance.
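The "addition and subtraction" style of difficulty can be sketched in a few lines; the table values, stat names and function below are purely illustrative assumptions, not any particular game's tuning:

```python
# Sketch of "number-crunching" difficulty: the same enemy stats are
# simply scaled by a per-difficulty multiplier. The multipliers and
# stat names are illustrative, not taken from any real game.

DIFFICULTY = {
    "easy":   {"enemy_hp": 0.75, "enemy_damage": 0.5},
    "normal": {"enemy_hp": 1.0,  "enemy_damage": 1.0},
    "hard":   {"enemy_hp": 1.5,  "enemy_damage": 2.0},
}

def scale_enemy(base_hp, base_damage, setting):
    """Return the enemy stats after applying the difficulty multipliers."""
    mult = DIFFICULTY[setting]
    return base_hp * mult["enemy_hp"], base_damage * mult["enemy_damage"]

print(scale_enemy(100, 20, "hard"))  # → (150.0, 40.0)
```

Note that nothing about the enemy's behavior changes here, only its scale, which is exactly why this kind of difficulty engages endurance rather than ingenuity.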
We have established that games are really just literature with more engagement than a paperback book. However, just like books, reading thousands of Barbara Cartland romance novels that just include permutations of characters with little takeaway and written in a simple vocabulary for all to understand, is way different than reading less but referential literature, such as philosophers, writers etched in history, or other grand-works that constitute the fundamentals of culture.
With that said, just like with books, it is important for people who are starstruck by the superficial to understand that the sheer quantity of games played does not matter much, that some games are more referential than others, and that every player should ask themselves whether a certain game is worth sinking their precious lifetime into. Going further, being too demanding of players, making the story arc too thin or the game loop too small, while claiming that players are ungrateful for abandoning the game, up to telling them to "git gud", is just pretentiousness at work, because a game might simply be intrinsically bad relative to others.
In some ways, this is equivalent to Hollywood producing a bad movie that nobody wants to watch; when the creators realize they have negative income, they try to blame everything from "piracy" to "hackers" for not receiving the income they expected on paper.
The early Assassin's Creed games expected the player to perform a fixed set of limited tasks in one city, then another, and then another, and, well, then yet another city, and so on. Any player complaining that the game is repetitive and dull would have been met with a political backlash ostracizing them as too superficial, or as hating the fact that the lead game developer is a woman, without anyone bothering to look upon the creation with some constructive criticism. In fact, the repetitiveness of Assassin's Creed extends over no less than the first four-or-so games, with little variation except for technical enhancements such as graphics or improvements to the physics engine. Another example reaches back to "Prince of Persia: Warrior Within", an interesting wall-grappling game that, just like its predecessor in the same saga, very nicely implemented time-shift effects where the player could roll back a scene. Unfortunately, aside from the time-shift innovation that is a hallmark of "Prince of Persia", "Warrior Within" had a map layout that expected the player to reach the top, gain some power, and then perform the exact same missions in reverse, with the help of the extra power. This decision to make the player repeat the game map in reverse made the game extremely repetitive, such that few players actually plowed through the same maps again.
In the end, expecting too much of a player, and even going overboard to call them superficial for not comprehending the depths of your work, is a bad design choice, and the pretentiousness can be observed in many forms.
Clearly, all games are unique and end up generating a niche, regardless of how wide and broad that niche might be; however, making a game too pretentious is a guarantee that it will receive attention from a select audience and not from a general one.
Perhaps related to the steering design flaw is the AI for Non-Playable Characters (NPCs), and sometimes the game design itself, acting as an adversary of the player that also happens to be "cheating".
A very good example of "cheating AI" is found in the games that appeared after Far Cry 3. In those games, when shooting enemies at a great distance with a sniper rifle, the enemies become alerted and somehow magically know where the player is, as if they had performed a full ballistic analysis to determine from which direction some random dude got popped. Even though we mentioned that "realism" should not matter too much, the idea here is that this lack of realism hurts the gameplay. In Far Cry's case in particular, the fix is easy: revert to the old behavior where, after a shot from a player who is out of the enemies' sight, the enemies form search parties and start combing the area in all directions to find the sniper, as one would expect. After Far Cry 3, the sniper rifle just became interchangeable with the flamethrower, because if the enemies already know the exact position of the sniper, then there is no point in using one, and switching to larger firepower is much more sensible.
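The non-cheating alternative amounts to searching from an estimate instead of reading the player's true position. The following sketch is our own illustration of that idea; the function name, coordinate convention and all numeric ranges are assumptions, not Far Cry's actual logic:

```python
# Sketch of non-cheating alert behavior: NPCs only get a fallible
# guess of where the shot came from, then fan out to search from it.
# All names and numeric ranges here are illustrative assumptions.

import math
import random

def estimated_shot_origin(victim_pos, heard_direction, spread_deg=45.0):
    """Guess a search point along the rough direction of the shot,
    with angular error so the guess is fallible like a human's."""
    error = math.radians(random.uniform(-spread_deg, spread_deg))
    angle = heard_direction + error
    distance = random.uniform(50.0, 200.0)  # NPCs cannot know the range
    return (victim_pos[0] + distance * math.cos(angle),
            victim_pos[1] + distance * math.sin(angle))

# NPCs comb the area around the guessed origin rather than walking
# straight to the player's true position.
search_point = estimated_shot_origin((0.0, 0.0), math.pi)
```

Because the guess is noisy and range-blind, a well-hidden sniper keeps their advantage, which is precisely what makes the weapon worth using.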
The idea of cheating AI also dates back to Final Fight 3 and other retro brawlers where bosses sometimes do not follow the general pattern of attacks that all other mobs follow, including having a larger attack range, being able to break locks, or being able to enact locks that the player cannot break, and so on. Double Dragon is another example where the player cannot break any combo of any enemy NPC: once the attack sequence starts, and provided the player is successfully hit once or twice, the player cannot parry the enemy attack anymore and just has to sit there and watch.
The main design recommendation here is very simple to sum up: a game should provide some rules and a toolkit for the player, and then let the player play the game as they want, without allowing the AI much more privilege over the player unless it is a very particular case such as an important scene. This allows players to generate their own solutions to problems whilst still being mindful of the rules laid out. Changing the rules is simply moving the goal-posts, and goes back to being too pretentious or, as mentioned previously, to steering, where the player stops playing and just becomes a spectator, thereby defeating the point of an interactive game.
Except perhaps for adventure games or arcade brawlers, most games rely on a game loop: a set of actions that the player carries out until the end of the game, perhaps with a change of environment.
For instance, we mentioned Assassin's Creed in the pretentiousness section, and Assassin's Creed is a perfect example of what a game loop consists of: the player is supposed to repeat a series of quests or missions that are very stereotypical and similar to each other across a number of maps in order to reach the end of the game.
The problem is that with a game loop that is "too tight", the player realizes that they are really just repeating a set of actions over and over again, ends up bored by the repetition, and might even give up on the game. However, making a game loop looser implies more development and more resources when the game is made, in order to make the game larger. Point-and-click adventure games, for example, have an infinitely loose game loop: the entire game is a linear progression from start to end, without any repetition, following a story-line, such that repetition might be part of the humor but the player never does the same thing over and over again.
Interestingly, one method of fixing the game loop problem has been to use generative algorithms, very similar to AI, to generate random environments that expand the game. For example, "No Man's Sky" uses generative algorithms to generate random terrain and distributes quests or missions following a template across an entire universe of planets, each with its own biome and environmental characteristics. One of the main sales pitches for "No Man's Sky" was the "endless universe" that space-game fans have been longing for, achieved without making the game huge, by relying on randomly generated environments using a templated pattern. However, the player base heavily invested in games like "Eve Online" or other involved space games immediately sensed the "game loop" which, in spite of being designed to be infinitely large, actually converged due to a limited number of quest types and "things to do". It is true that players could explore randomly generated space, play around with ships, upgrade them, shoot rocks or defeat random enemies, but it felt like the whole experience was going nowhere given that there was no actual story arc.
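The core trick behind such "endless" universes is seed-driven determinism: nothing is stored, and the same identifier always regenerates the same world. The sketch below is a deliberately tiny illustration of that principle; the function, biome names and value ranges are our own assumptions, not how "No Man's Sky" actually works:

```python
# Sketch of seed-driven procedural generation: a planet id is used as
# a random seed, so the same id always regenerates the same terrain
# and nothing needs to be stored. Purely illustrative, not the actual
# algorithm of "No Man's Sky".

import random

def planet_terrain(planet_id, size=8):
    rng = random.Random(planet_id)        # deterministic per planet
    biome = rng.choice(["desert", "ice", "jungle", "ocean"])
    heights = [rng.randint(0, 9) for _ in range(size)]
    return biome, heights

# The same id regenerates the identical planet on every visit.
print(planet_terrain(42) == planet_terrain(42))  # → True
```

This also shows why the loop converges: however large the seed space, the variety is bounded by the templates (here, four biomes and a height map), not by the number of planets.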
Perhaps one of the dangers leading to a game loop that is too tight is underestimating the player base, especially in corporate settings where employees might be technically inclined but have no track record of playing games.
A quick one, mostly related to marketing: one of the worst advertising decisions is to showcase a game with screenshots taken during in-game movie cut-scenes.
The audience already knows that this is a CGI-driven world by default, so showing off with images will not really be showing off. Additionally, the producers of the game might not even create the best digital art in the world, such that picking a "fight" and competing with digital artists over still images is a fight not worth fighting, given that the purpose is to advertise an interactive game; "stills" do not show off the best feature of a dynamic game (where "stuff moves around" instead of standing still).
Similarly, cut-scene stills tell players nothing about the game and can often be treacherous information: the game could be great but the graphics might lag behind, so instead of acting as an advertisement booster, screenshots of in-game movie cut-scenes will act as a deterrent to players looking for something entertaining. Cut-scene stills, or screenshots of game characters or memorable places within a game, should be kept for vanity items, memorabilia, or at least post-gameplay content that keeps the game discussed by players after everyone has played it. Otherwise, unless the game is a sequel or made for a popular movie, nobody would even know who those people are, and the memorable places will not be memorable because the person viewing them has not made any memories yet.
Developers and fans live-playing the game to showcase it works for people who are very socially inclined, and is very much better than showing screenshots taken during cut-scenes, and much better than screenshots in general. Perhaps the best way to showcase a game and tell the entire story to a gamer is either a no-commentary playthrough or in-game sequences that highlight the best game mechanics.
Durability, the process through which items, weapons and armor gradually decay, either by being used, by taking damage, or simply over time, was originally introduced for the purpose of balancing the internal economy of games. The idea was that, after a while, certain items needed repairs, such that players would have to visit an in-game store or use a repair kit in order to repair their items. In doing so, the player expended money and, in turn, the individual progression of the player was slowed down. Later on, games used durability to fluidize the game, in the sense that once an item was broken, another item had to replace it, such that the player was tasked with finding a new item to use.
One of the problems with durability is that after a while, scaling with the equipment and with sufficient gameplay, the wear-and-tear mechanism became a detriment to playing the game itself, with players reckoning that a certain mission or quest would incur durability costs that would not make it worth playing. A memorable mention is early World of Warcraft, where a raid would take a very long time and the durability costs, given the equipment required, would be very high, to the point that many people simply did not want to play because they feared the amount they would have to pay just to repair their equipment. It ended up in funny scenes with players removing their clothes altogether and fighting naked, because there was no durability penalty for fighting naked, such that after reviving they could pick up their clothes and just walk away.
Either way, durability is frequently encountered in very modern games, but it is not always used properly. Most of the time, durability is just thrown in as part of the difficulty curve, which is mostly inappropriate because it does not make the game more difficult but rather tends to make the player more conservative in their gameplay, thereby preventing them from taking enough risks to explore the entire game.
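The deterrent effect of durability is really just a cost/benefit calculation made by the player. The sketch below, with entirely illustrative numbers and names, shows how a quest can become rationally not worth playing once repair bills exceed rewards:

```python
# Sketch of why durability can deter play: when the expected repair
# bill exceeds the expected reward, the rational move is to skip the
# content entirely. All names and numbers are illustrative.

def quest_is_worth_it(expected_reward, durability_loss, repair_cost_per_point):
    """Return True when the quest's reward beats its repair bill."""
    repair_bill = durability_loss * repair_cost_per_point
    return expected_reward > repair_bill

# A long raid: modest gold reward, heavy wear on expensive gear.
print(quest_is_worth_it(expected_reward=50,
                        durability_loss=30,
                        repair_cost_per_point=2))  # → False: 50 <= 60
```

Once this inequality tips the wrong way for enough content, durability stops balancing the economy and starts shrinking the playable game, which is the early World of Warcraft situation described above.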