The need for dedicated security people in your development studios

Scenario & Introduction

Friday, 4:00 PM - Your development team is working hard on a new update - it’s the last hour of the last day of your sprint, and you need to have something ready to launch. Your scripters are all focused on delivering the new features you promised to your community, and your QA testers are ready and eager to test everything before it gets rolled out to the public. Two of your programmers, Alice and Bob, discuss a blocker they both have.

Alice: I can’t get this stupid turret to shoot!
Bob: What’s been stopping you?
Alice: The security framework is stopping me from allowing clients to fire non-character gun components. How am I meant to get this to work?
Bob: Yeah I had that problem yesterday too. I just added in a whitelist for my Event.
Alice: Thank you :slight_smile: I’ll try that!

Friday, 5:00 PM - Your developers finally finish up and push their place to the internal QA development area. Your QA testers jump on and start trying to break anything and everything. A bunch of bugs previously uncaught by your unit tests surface and are quickly fixed. One of the updates added a new area that got users flagged by the anti-noclip, so it was disabled in that zone.

Friday, 6:30 PM - Everyone is ready, all the bugs are fixed and the ads have already been pushed. It’s time to deploy to the live environment! :white_check_mark:


:mantelpiece_clock: Fast forward to 1 week later…


Friday, 2:00 AM - Revenue goes through the floor, and average playtime drops by over 30% in an hour. This triggers an alert, and your team lead is woken up to check out the problem. They find a huge surge in exploiter reports of people noclipping and then killing others. The team lead is bewildered as to how this could have happened - they had put so much effort into a security framework designed to block every single one of the attacks that are so clearly still happening.

What went wrong?

I’m sure you’ve all figured it out by now - but there were multiple weaknesses in the story above that caused this lapse in security, despite a strong security setup built by some very competent people. And yes - I’m sure the developers in this story patched the new exploits pretty darn quickly; however, not quickly enough to avoid the hundreds of USD lost because the mistake happened in the first place.
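The first weakness was Bob’s whitelist. To make the flaw concrete, here is a hedged, engine-agnostic sketch in Python (all names - `WEAPONS`, `fire_weapon_unsafe`, the players - are hypothetical, not any real API): a whitelist says *which* weapons may be fired remotely, but never checks *who* is asking to fire them.

```python
# Hypothetical sketch of the whitelist flaw - not real engine code.
WHITELISTED_WEAPON_TYPES = {"turret", "rifle"}

# weapon_id -> (weapon_type, owning_player), as the server would track it
WEAPONS = {
    "turret_1": ("turret", "alice"),
    "rifle_7": ("rifle", "bob"),
}

def fire_weapon_unsafe(requesting_player, weapon_id):
    """Bob's fix: checks only WHICH weapon is firing, never WHO asked."""
    weapon_type, _owner = WEAPONS[weapon_id]
    return weapon_type in WHITELISTED_WEAPON_TYPES

def fire_weapon_safe(requesting_player, weapon_id):
    """The missing check: the requester must actually own the weapon."""
    weapon_type, owner = WEAPONS[weapon_id]
    return (weapon_type in WHITELISTED_WEAPON_TYPES
            and owner == requesting_player)

# An exploiter ("mallory") asks the server to fire Bob's rifle:
print(fire_weapon_unsafe("mallory", "rifle_7"))  # True - exploit succeeds
print(fire_weapon_safe("mallory", "rifle_7"))    # False - request rejected
```

The exact check will differ per game, but the principle holds: every remote request needs validation of the requester, not just the target.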


The Solution: Put a security advocate on your team :female_detective:

A security advocate is an engineer who is given ultimate responsibility for the security of your game and its development lifecycle. They are the risk owner: someone empowered to override decisions and review code from a security perspective.

How? - Pick whoever on your team (or someone outside it) is passionate about security. You’re looking for someone who can understand the code and integrate into the team, but who looks at everything with a security mindset - one that sees through the complexity and focuses on which attack vectors a particular decision might open or close. Then, empower them to make these decisions. Don’t let them be a voice shouting into the dark - listen to them, and if needed make the rest of the team listen too. Remember that, whilst security increases costs slightly in the short term, it pays for itself many times over in the long term by preventing scenarios like the one above from ever happening.

Why? - It’s easiest to demonstrate. Let’s add Sarah as a security advocate to the story above, and see what happens.


Friday, 4:00 PM - Your development team is working hard on a new update - it’s the last hour of the last day of your sprint, and you need to have something ready to launch. Your scripters are all focused on delivering the new features you promised to your community, and your QA testers are ready and eager to test everything before it gets rolled out to the public. Two of your programmers, Alice and Bob, discuss a blocker they both have; Sarah is reading through code, but also keeping an eye on their conversation.

Alice: I can’t get this stupid turret to shoot!
Bob: What’s been stopping you?
Alice: The security framework is stopping me from allowing clients to fire non-character gun components. How am I meant to get this to work?
Bob: Yeah I had that problem yesterday too. I just added in a whitelist for my Event.
Sarah: Hold on. What are the security implications of doing that? Could this allow exploiters to fire the Gun event on other people’s weapons?
Alice: Good point Sarah! I could make a new Event that handles just turrets; that way I can focus on the security of just that system! :slight_smile:
Bob: Oh dear - I’ll check out my code too. Sorry!
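Alice’s revised design - a dedicated turret event rather than a loosened shared one - can be sketched as follows. This is an illustrative, engine-agnostic Python sketch (the names `TURRETS`, `fire_turret`, the operator field, and the cooldown values are all assumptions for the example): because the event handles nothing but turrets, its server-side checks can be narrow and specific.

```python
import time

# Hypothetical server-side state: turret_id -> who is operating it,
# when it last fired, and its minimum time between shots.
TURRETS = {
    "turret_1": {"operator": "alice", "last_fired": 0.0, "cooldown": 0.5},
}

def fire_turret(requesting_player, turret_id, now=None):
    """Server-side handler for the turret-only event.

    The turret must exist, the requester must be its current operator
    (set server-side when a player mounts it), and the fire-rate
    cooldown must have elapsed."""
    now = time.monotonic() if now is None else now
    turret = TURRETS.get(turret_id)
    if turret is None:
        return False  # no such turret
    if turret["operator"] != requesting_player:
        return False  # requester isn't the player controlling it
    if now - turret["last_fired"] < turret["cooldown"]:
        return False  # firing faster than the turret allows
    turret["last_fired"] = now
    return True
```

The narrower the event, the easier it is to enumerate and enforce exactly what a legitimate request looks like - which is the point Sarah was making.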

Friday, 5:15 PM - Your developers finally finish up and push their place to the internal QA development area. Your QA testers jump on and start trying to break anything and everything. A bunch of bugs previously uncaught by your unit tests surface and are quickly fixed. One of the updates added a new area that got users flagged by the anti-noclip, and Alice tried to disable it - however, this was caught in code review by Sarah, who instead expanded the noclip zone to fit the new area. This pushed the release time back, though, as it was a complicated job.

Friday, 7:30 PM - Everyone is ready, all the bugs are fixed and the ads have already been pushed. It’s time to deploy to the live environment! :white_check_mark:


This time, though, there is no security incident. Sarah has caught all of the issues!

Hopefully this inspires you to consider doing the same in your game - I can assure you it’s worth it :slight_smile:


I think there’s a general issue with developers not considering security - it happens across a lot of programming professions, which is why you need dedicated infosec departments. I think this is a great solution: giving someone ultimate responsibility, someone who has to answer for security flaws. But perhaps giving every team member that responsibility - and, importantly, holding them accountable for security issues - could work well too; systems should be designed with security in mind.

In your example, I noticed you talked about a ‘security framework’, which I think could cause issues in itself. It separates security from everything else, and implies that you don’t need to think about the security implications when designing a system, because the framework already handles them. I’m an advocate for not making security easier, because it makes you complacent if you’re not always having to consider the security implications.