Thursday, 18 December 2014

Final recap - the good, the bad, and the ugly: another big wall of text

It's been a while since our previous recap, and, with the final presentation tomorrow, it's time to review our work: what went well, what we could and maybe should have done differently, and what we most definitely shouldn't have done.

In this blogpost I will mainly focus on the reports we wrote, and won't go into the details of the implementation and art; those recaps will be covered in separate blogposts.

Let's start off by reviewing the plans we had at the end of our previous recap; then we'll move on to how our latest reports fit into those plans. Finally, we'll go over what we have learned, what we would have done differently, and which things might be touched upon in the future.

The rough roadmap of our previous recap

For reference, our previous recap can be found here.

After going over our first two reports, we broke down casual games into three subdomains: core game mechanics, interaction mechanics, and aesthetics. These would act as our framework for positioning our evaluations.

We left you, the reader, just before we actually released the game into the wild. At that point we still wanted to implement analytics and to improve the learnability and the gameplay mechanics. We had established that the learnability could be improved, so that became our next focus point.

We also mentioned the following longer-term areas that could be improved:

  • Evaluation and tweaking of the core game mechanics. 
  • Experimentation with non-tap game controls on touch-screen devices. These could be for example swiping and instant lane switching. 
  • Several proper standardized questionnaires related to the current state of our game.

So we'll take a look at what we have improved and evaluated since that last wall of text:

Evaluations

Evaluation 3 


Our first goal was to improve the learnability.
The main problems were that players did not understand how to react to the big and small dinosaurs, and that the old tutorial screen did not explain the multiplier.

We theorized that this could be accomplished by improving the game's simple tutorial screen so that it explains the different mechanics to the new player. We also improved the visual feedback players get when eating a small dinosaur by adding a small sprite above the head of our top hat dinosaur, and featuring this same sprite next to the score.

The tutorial was changed from:

[screenshot of the old tutorial screen]

to:

[screenshot of the new tutorial screen]
We had 18 people evaluate these changes to determine whether the controls and the gameplay were clearer now. The new tutorial seemed to properly explain the entities (the small and big dinosaurs, etc.), but the multiplier still wasn't clear to most participants.

Thus we decided that the multiplier was something that needed improvement in future evaluations.

Furthermore, several people commented on the control layout, suggesting that other controls that did not involve tapping, such as swiping, would be more intuitive and perform better. Since we had not yet tested these layouts, and we wanted to be able to properly determine the best control scheme, we decided that a next evaluation should be spent on this as well.

Evaluation 4


To end the discussion about the best control scheme on touch devices once and for all, one final evaluation was held, covering a total of six possible layouts.


The control schemes:
  • left - up; right - down
  • top - up; bottom - down
  • left - down; right - up
  • dragging
  • swiping
  • tapping under the playable character - down; tapping above the playable character - up

We had eight people evaluate the different control schemes. For each control scheme, they:
  • received an explanation of the control scheme
  • played the game several times with the control scheme
  • answered a couple of questions about the control scheme.

We found that dragging and swiping performed poorly, and that the originally proposed control scheme performed best according to our players. A close second was top/bottom tapping. We thus decided to keep these two schemes in the game.
Unfortunately, we only tested on a smartphone, so we were not able to determine whether screen size has any effect on the preferred control scheme.
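
To make the winning scheme concrete: tapping the left half of the screen moves the dinosaur up a lane, and tapping the right half moves it down. Below is a minimal sketch of such a tap handler; the function, lane indexing, and values are hypothetical illustrations, not the actual game code.

```python
# Minimal sketch of the winning control scheme: left half of the screen = up,
# right half = down. Lane 0 is the topmost lane; all names are hypothetical.
def handle_tap(tap_x, screen_width, current_lane, num_lanes):
    """Map a tap position to the resulting lane index."""
    if tap_x < screen_width / 2:
        return max(current_lane - 1, 0)            # left half: move up a lane
    return min(current_lane + 1, num_lanes - 1)    # right half: move down a lane

# Example: on an 800 px wide screen, a tap at x=150 moves the dino up one lane.
print(handle_tap(150, 800, current_lane=2, num_lanes=4))  # -> 1
```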

Evaluation 5


I mentioned earlier that at the end of the last recap we wanted to release our game into the wild. We added analytics to both the Android version and the web version, and officially released the game into the wild on the 27th of November, with both a release in the Play Store and an announcement on Facebook.
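
For the builds themselves we used Google Analytics' regular integrations; purely as an illustration of the kind of events we logged, here is a sketch that sends a gameplay event through the Google Analytics Measurement Protocol from Python. The tracking ID and event names are hypothetical placeholders, and this is not the exact integration our builds used.

```python
# Hedged sketch: send a gameplay event via the GA Measurement Protocol.
# The tracking ID and event names below are hypothetical placeholders.
import uuid
import requests

CLIENT_ID = str(uuid.uuid4())  # anonymous per-install client ID

def track_event(category, action, value=None):
    payload = {
        "v": "1",                # Measurement Protocol version
        "tid": "UA-XXXXXXXX-Y",  # hypothetical GA tracking ID
        "cid": CLIENT_ID,
        "t": "event",
        "ec": category,          # e.g. "gameplay"
        "ea": action,            # e.g. "retry" or "session_end"
    }
    if value is not None:
        payload["ev"] = str(value)  # e.g. session length in seconds
    requests.post("https://www.google-analytics.com/collect", data=payload)

# Example (commented out so the sketch sends nothing when run):
# track_event("gameplay", "session_end", 42)
```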

The results of this 'release in the wild' were analysed on the 7th of December in report 5. In hindsight, we should have had a clearer intention of what exactly we wanted to find out in this evaluation; because we did not define our objective precisely, it was somewhat difficult to properly analyse the data and form a conclusion based on the analysis.

We tried to derive player satisfaction from the length of the game sessions and the number of retries in each session. Few to no retries would indicate a lack of interest in the gameplay, suggesting that our game might be boring or too frustrating. Extremely short game sessions would indicate that our game might be too difficult; on the other hand, if we only saw really long sessions, the game might be too easy.
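
As a minimal sketch of this reasoning, assuming each logged session record carries a length in seconds and a retry count (the field names and numbers below are made up, not our actual analytics schema):

```python
# Hypothetical session records; in reality these would come from the
# analytics export rather than being hard-coded.
sessions = [
    {"length_s": 12, "retries": 0},
    {"length_s": 95, "retries": 6},
    {"length_s": 41, "retries": 2},
]

avg_length = sum(s["length_s"] for s in sessions) / len(sessions)
avg_retries = sum(s["retries"] for s in sessions) / len(sessions)
short_share = sum(s["length_s"] < 30 for s in sessions) / len(sessions)

# Few retries -> players may find the game boring or too frustrating;
# a large share of very short sessions -> the game may be too difficult.
print("avg session: %.0fs, avg retries: %.1f, sessions under 30s: %.0f%%"
      % (avg_length, avg_retries, short_share * 100))
```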

One problem we had was the amount of data: creating virality is hard (more on that later), so we did not have as many results as we would have liked. Furthermore, the results we did have might have been skewed by ourselves and close relatives, who played the game significantly more than random players with whom we had no personal connection.

We did conclude, however, that the game is either too difficult or not entertaining enough to keep playing. This was further underpinned by comments during previous evaluations that the game's difficulty increased too quickly.

Thus we decided to make the game somewhat easier by limiting the spawn rates and reducing the maximum speed slightly.
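
To illustrate the kind of change this entails, here is a hedged sketch of a difficulty curve with a capped spawn rate and a slightly lowered maximum speed; the constants and the linear ramp are illustrative, not the game's actual tuning values.

```python
# Illustrative difficulty curve: speed and spawn rate ramp up with elapsed
# session time, then cap. All constants here are made-up tuning values.
MAX_SPEED = 14.0            # slightly lower cap than before the tweak
MAX_SPAWNS_PER_SEC = 2.0    # limit on how many dinos/obstacles may spawn

def difficulty(elapsed_s):
    """Return (scroll_speed, spawns_per_second) for the current session time."""
    speed = min(8.0 + 0.05 * elapsed_s, MAX_SPEED)
    spawn_rate = min(0.8 + 0.01 * elapsed_s, MAX_SPAWNS_PER_SEC)
    return speed, spawn_rate

# Example: after two minutes the curve has hit both caps.
print(difficulty(120))  # -> (14.0, 2.0)
```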

Evaluation 6


Our final evaluation focussed on evaluating the current state of our gameplay with the standardised Game Review Questionnaire, which can be found in the appendix of this report.

We wanted to know the current state of our game, and more specifically whether it was actually fun, especially since we had tweaked the difficulty somewhat. The Game Review Questionnaire has several broad, multiple-choice questions in which participants can state whether they think aspects like aesthetics, gameplay, etc. are okay, and what should be improved. Overall we got decent scores, though we also found that our game could be improved in both length/content and fun.

In a way this was somewhat expected: areas we focussed on previously scored definitely higher than the aspects we hadn't focussed on yet. This in itself is a good sign, meaning that the time spent working on those areas was not spent in vain.

We also somewhat expected the lower score on content/length. Originally we thought of different modes that could be included in DinoTopHat, for example a story mode with levels and such. At the moment the game only has one game mode, the endless mode, and while it is fun, it is not too addictive, and I wouldn't be surprised if people lost interest after playing it a couple of times. The achievements and high scores might add a completionist and competitive element that people could enjoy, but to say this would be enough to create good replayability would, in my opinion, be incorrect.

Judging from this evaluation, future work on the game should thus focus on adding more content, and possibly on balancing the game further, so that players enjoy a good mix of satisfaction and difficulty.

Summary of the reports

As with any artistic endeavour, one could say that such a project is never finished, and clearly we still have many possibilities to explore if we wanted to improve this game further. Whether that would be worth the time is a different discussion, though.

So far we have built upon each evaluation to give us direction on which aspect of the game to improve next. We started out by determining a good control scheme for our touch version. Interaction with a casual game is the very first thing that leaves an impression on a gamer who has just downloaded it, so we deemed it important to get this right from the beginning.

In the second evaluation we continued gathering insight into users' opinions of our control scheme choice, and started exploring the learnability of our game.

The third evaluation focussed on evaluating our changes to the learnability, putting the conclusions of the second evaluation to the test.

The fourth evaluation was used to make a final decision about the control scheme for touch devices; having tested several different layouts, we are now confident we picked the two layouts that most players can appreciate.

The fifth evaluation focussed on the data gathered from our in-the-wild release, which we used to analyse the difficulty and fun of our game.

Finally, the sixth evaluation focussed on the current state of the game's gameplay mechanics.

Compare the first minimum viable product with the current game, and you'll see a definite improvement.

Which leads us to:

Current state - the good, the bad, and the ugly.

One course, six evaluations, and a lot of development time and caffeinated drinks later, where are we now?
Since evaluation 6 we have added a day and night cycle, which we hypothesised would help give players visual feedback on the multiplier; unfortunately, we were not able to properly evaluate this. The final version is playable here.
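
As an illustration of how such a cycle could be tied to the multiplier, here is a minimal sketch that blends the sky colour from day to night as the multiplier rises; the colours and the linear mapping are assumptions for illustration, not the game's actual implementation.

```python
# Hedged sketch: blend the sky colour between day and night based on the
# score multiplier. Colours and max multiplier are illustrative values.
DAY = (135, 206, 235)    # light sky blue (RGB)
NIGHT = (25, 25, 80)     # dark night blue (RGB)

def sky_color(multiplier, max_multiplier=8.0):
    """Linearly interpolate from DAY to NIGHT as the multiplier grows."""
    t = min(multiplier / max_multiplier, 1.0)
    return tuple(round(d + (n - d) * t) for d, n in zip(DAY, NIGHT))

# Example: at multiplier 4 the sky is roughly halfway between day and night.
print(sky_color(4))  # -> (80, 116, 158)
```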

So time for some conclusions, let's start with the positive:

The good


We have a game; it looks nice, and it is playable. It has Google Analytics. It has a top hat. It is published on the Google Play Store. I think we definitely made something we can be proud of.

Besides the final product, we have learned a lot about evaluations, questionnaires, and the difficulties of designing user experiences, which, besides being the goal of the course, is definitely worth something in future endeavours.

The bad and the ugly

Of course, there is a lot we could have done better, so let's review the major points.

Our participants. 

Almost all of our participants were either recruited from people who spend their hours in the A-building or were people we know personally. The goal of testing with participants is to evaluate your product such that you can reason about how your entire target audience thinks about it. If your participants don't properly represent your target audience, it becomes harder, if not impossible, to properly reason about them.
Our participants were probably not an accurate representation of our target audience, and thus our conclusions might have been somewhat skewed.
Which leads to the next point.

Virality

Virality is hard. Making people play and talk about a casual game is definitely not one of the easiest tasks. We originally hoped that just by putting it on the Google Play Store, talking a bit about it, and asking friends to play it, it would generate enough publicity and interest to do well on its own. It didn't. We did try to promote our game a bit: we made a Facebook page, an IMDb page, and created an IndieDB account. We promoted it on Reddit for a bit and talked about it on Facebook. Still, it did not pick up. Virality is hard.

If we were to develop a game again, we would need to learn a bit more about social networks and how to properly market a game, so that it goes viral, or at least more viral than it did this time.
I think that if we had thought about a media campaign, identified the major social networks we should target, and prepared enough material to actually post there and keep it fresh, we might have done a lot better. We should also have started promoting our game earlier; we only started near the end, wasting precious time in which the game could have been picked up.

Of course, this would have meant a lot of work, time that we instead spent on the reports, questionnaires, and development; every choice has its good and bad points, I guess.

If the game had gone viral, we would probably also have had a better test group, one that properly represented our complete target audience.

Reports

Another thing that hindsight made very clear: it is good to ask the same questions each time you do an evaluation on the same subject, whether through a standardised questionnaire or not. This makes it a lot easier to compare results and actually determine whether you improved the game. I would even say it would be nice to have two sets of questions in each evaluation (or possibly two evaluations per cycle): one in which you explore and determine the current state of the feature you want to improve next, and one in which you evaluate whether the proposed changes actually improved your game compared to the previous exploration cycle.

If we were to do this again with the information we have now, we would probably also include a standardized questionnaire in more evaluations. This would allow us to track our improvements better. During development we figured that we should improve the game first before doing a more general, standardized questionnaire; we thought our game wasn't ready for such an evaluation. If we had tested with such a questionnaire more often, we could have had some really interesting reports from which to conclude which changes affected the game positively, and which negatively.

Minor other things

If we had spent more time at the beginning properly optimising the way we collected and processed our questionnaire data, it would have saved us time later on.
We probably should have read up on game mechanics theory and casual games as well; this would have allowed us to create a better framework at the start, and it could have guided us when determining which aspect to improve next.

What is left to do?

Now that we have looked at the things we learned during this course and the development of DinoTopHat, let's take one final look at the things we might improve if we were to continue development.

To increase both satisfaction and learnability, we hypothesised that giving the player even more visual feedback, in the form of juiciness (in reference to this video), could help. The day and night cycle was one aspect of this juiciness.
Other improvements could be more graphical effects when a player eats a dino or dodges an obstacle/enemy. Improving the sounds and adding visual effects to the multiplier might also help.

To improve the length and content of the game, more modes could be added; for example, a story mode featuring levels and a story arc (possibly involving the top hat) might be a nice addition. Another big feature we wanted to add during the brainstorm phase was evolution: eating a significant number of smaller dinos would make your dinosaur grow and possibly evolve, allowing it to eat even more (smaller) dinosaurs.

Finally, we could also add more social media interaction to the game. This could improve its virality, as well as allow players to interact with friends and strangers, creating a community around DinoTopHat.

Closing thoughts

Overall, I think we created a decent casual game, especially considering that none of us had any experience developing casual games. Could we have done better? Hell yes, but most projects could be done better in hindsight.

Given the review of our evaluations and the bad-and-ugly conclusion points, I think we have learned a great deal along the way. I personally think that the fact that we would probably change significant parts of our development process if we were to do it again is an indicator that we have learned. That is, I think, just as important as our final product, if not more so.

So, finally, a thank you is in order for the professors and guest speakers of the course Fundamentals of HCI at the KUL, and of course for everybody who has read this blog.

Tomorrow will be our final presentation (which will most likely be uploaded to this blog as well), but content-wise I think this will be the last major update. Thank you for following us through this development process, and we hope to see you again, be it in the flesh or in pixels.

Update:
A link to the presentation can be found here.




Sunday, 14 December 2014

Session blog: NASCOM

In this week's session, an employee of NASCOM came to talk about his job and taught us a few important lessons. In this blogpost we'll recap his talk and tell you what we've learned and remembered.

The first part covered model thinking: when humans think about something, they have a certain image or model in mind. For example, when we think of the world map, Europeans picture Europe and Africa in the center, while Americans picture the Pacific Ocean in the center.

This is an important fact when designing. People expect a design to match their expectations. The most interesting part is thus how far we can stretch different designs and explore the boundaries of each design.

The second part described his job at NASCOM: they help companies design projects and help build them. The biggest problem appears to be that views on certain things (goals, users, ...) are not always shared by different employees. One of the first things he does when assigned to a project is get all the people talking in one room, to make sure that all visions are aligned. Communication is a very important factor.

The main things we remember from his talk are: people think in models, the view of a product must be shared among all employees, and communication is very important.

It was very interesting to have someone from NASCOM come and talk about his job. Our education is somewhat related to this field, and it might become one of our jobs in the future.