Building Alliances: Self-Assessment and Quality Assurance

Self-Assessment
Last week, in session, I cross-referenced the work I have produced for this project against the Building Alliances Rubric, the criteria this work will be assessed against. The Rubric details nine criteria on which the project as a whole will be marked; we were shown these at the beginning of the project, and they have been available for us to see since.

I found that the module learning outcomes were written in a way that seemed incredibly vague, mentioning in broad terms how pieces of work may be "Satisfactory" to the project as a whole, and how work that "Exceeds Expectations" gains more marks than work that does not. The Rubric gives no definition of what a "Satisfactory" piece of work actually looks like. When I self-assessed my work against it, I therefore found it difficult to judge whether my work was "satisfactory" or whether it met "expectations". It would have been easy to simply claim that yes, my work did meet these expectations and would attain a high grade: the vagueness of these marking terms seems, to me, to invite personal interpretation. Looking over my work, I often felt confused about what exactly I was assessing, and felt I had to be honest with myself and view what I had produced objectively.
This presented its own set of issues; I do not know whether I was too conservative in my self-assessment, underselling the work I had done, or whether I was overselling its quality and predicting myself an erroneously high grade.

I generally marked my work as having achieved the quality of a low or high 2 grade. I feel that my work has contributed to the project as a whole and that I have been a satisfactory member of the group, contributing what I can to my own work and the work of others. I strongly believe that I could have done much more, however, and am not entirely happy with the quantity or quality of the work I have produced. Post-formative in particular, I feel my work took a sharp dip in quality, when I was making low-poly 3D models for the group that took exceedingly long given their simplicity. I was trying to learn and use 3D software, Blender, that was relatively new to me, and I feel that my work stagnated because of this.

I additionally feel that I did not utilise my role as "2D Lead" well enough: we should have spent longer on the design process, which I am more comfortable with, and I should have been more vocal in deciding aspects of the game and its design. The designs for enemies and towers were often chosen for their simplicity: though this was a valid consideration against our target audience, it was a general style I was not comfortable with. As the chosen designs were often not those I had created, I found it difficult to catch up to them in terms of target audience considerations, and I feel I would have benefitted from advocating for more time to experiment. As such, I fear that much of the work I have produced does not meet the target audience's expectations, and that my contribution to the group has been stymied as a result.

I feel that other aspects of my project, such as considerations of how the assets we have produced might fit into a game, and explanations of my process and techniques, are comparatively stronger. One outcome is also dedicated to our environmental impact statement and research, which I feel somewhat confident in, but which has had little influence on the project beyond what we have already produced.

On the whole, I feel that my contribution to the project has been weak, though again I am uncertain whether this assessment stems from uncertainty and an overly conservative approach to my self-assessment. I do, however, feel an obligation to recognise my personal shortcomings in my contribution to the group project, which looking at the Rubric has highlighted.


Quality Assurance
In our group, we have routinely given and received feedback throughout the course of the project. This has mainly been aimed at keeping our focus on target audience considerations and game style, alongside technical feedback on 2D and 3D work, and it has gone on to inform how we have progressed. This quality assurance has ensured that all assets produced are of an acceptable quality and align with the target audience specifications. As the advice has been both supportive and critical, it has greatly informed the design choices I made when producing assets for the project.
Quality assurance has played a strong role in our pipeline: after the creation of each 2D or 3D asset, we have given feedback. Simple as this feedback may have been, it still informed how we as group members progressed. I personally found that this feedback helped greatly, especially with the 3D assets I struggled with, showing me where to work next. Even a simple positive response from the group helped me in this regard.

This group-wide agreement on quality assurance proved to be a large aspect of our communication and a critical part of our development on the project. I myself routinely changed 3D and 2D assets in response to such feedback, such as when I altered the Microplastic Enemy designs in one loading screen design and added White Blood Cell units to another.

The following are several examples of feedback I have given and received across different elements of the project.

(Feedback screenshot: in regards to me following a tutorial to create model outlines in Blender)
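For context, the tutorial covered the common "inverted hull" approach to outlines. As a rough sketch of the idea, assuming Blender's Python API (bpy) and an active mesh object (the modifier values and the "OutlineBlack" material name are my own illustrative choices, not the tutorial's exact steps):

    import bpy

    # Inverted-hull outline: grow a shell around the mesh with a Solidify
    # modifier, flip its normals, and shade it flat black so that only a
    # silhouette shows around the original model.
    obj = bpy.context.active_object  # assumes a mesh object is selected

    solidify = obj.modifiers.new(name="Outline", type='SOLIDIFY')
    solidify.thickness = 0.02             # outline width in scene units
    solidify.offset = 1.0                 # push the shell outward only
    solidify.use_flip_normals = True      # invert the shell's faces
    solidify.material_offset = 1          # shell uses the next material slot

    outline_mat = bpy.data.materials.new(name="OutlineBlack")
    outline_mat.use_nodes = True
    bsdf = outline_mat.node_tree.nodes["Principled BSDF"]
    bsdf.inputs["Base Color"].default_value = (0.0, 0.0, 0.0, 1.0)
    outline_mat.use_backface_culling = True  # cull the shell's near side
    obj.data.materials.append(outline_mat)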

In game studios, this type of feedback and quality assurance is not limited to pre-production and production. Quality assurance also extends to the playtesting of a game, ensuring that players remain engaged and interested in the experience the studio is attempting to create (8Bitplay 2023). These testers, known as QA Testers, systematically test areas of the game in a repetitive fashion, attempting to find flaws that may be intrinsic to the game's design, or incidental glitches that negatively impact the game experience (8Bitplay 2023). Though we do not currently have a working prototype of our game, and have mainly been using Unreal Engine to model how textural effects such as cel shading and outlines may appear, if we did create a working prototype then QA testing could be used to demonstrate whether or not the disparate components of our game function as one, showing where flaws in our balancing or game design lie.
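On the shading itself: in Unreal Engine, cel shading is typically built as a material or post-process graph rather than written as code, but the underlying idea reduces to quantising the lighting term into a few flat bands. A minimal illustrative sketch in Python (the function and its parameters are my own, not any engine API):

    import math

    def cel_shade(n_dot_l: float, bands: int = 3) -> float:
        """Quantise a Lambertian lighting term (the dot product of the
        surface normal and light direction) into discrete flat bands --
        the core idea behind a cel-shaded look."""
        lit = max(n_dot_l, 0.0)                # ignore back-facing light
        return math.ceil(lit * bands) / bands  # snap up to the nearest band

    # e.g. cel_shade(0.45) gives 2/3: one of three flat tones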

Scheduling
Throughout the course of the project, we have used a rough schedule created by our Director, Seb, to show where we are and where we would like to be. This has helped us keep on track without rigidly dictating our actions: we used it as an asset list with a broad time frame applied, so that we could adequately manage our time. Otherwise, we worked on a largely week-by-week basis, looking over what we had produced and setting weekly SMART targets for session and study-time goals.


References
8Bitplay, 2023. Comprehensive guide to the game development roles – QA game tester jobs [online]. 8Bitplay. Available at: https://8bitplay.com/blog/ultimate-super-turbo-hd-guide-to-the-game-development-roles-qa-game-tester-jobs/ [Accessed 21 January 2025].
