Well, attacking the problem at the root is fairly
straightforward, at least in concept - it would just mean actually curating safe, respectful online chats, instead of the toxicity that gaming and the wider internet have unfortunately become known for. Many people believe this is impossible, but it’s
worth noting that, as a society, we do it all the time. If someone shows up to a playground in real life and begins loudly talking about white supremacy, or requesting photos of genitalia (even from adults, rather than kids), we have a number of ways to correct their behavior, ranging from a gentle reprimand up to and including police intervention. This doesn’t make playgrounds 100%, absolutely, unquestionably safe… but it does prevent kids from picking up questionable ideologies or offensive slurs as a matter of course in the way they often do online. How do we solve this issue on playgrounds? It’s not by shadowing our kids, listening to every word they say and recording every move they make. It’s by having authority figures - parents, schoolteachers, coaches, etc. - close enough that they can notice from afar when trouble seems to be breaking out. Raised voices, an unexpected person joining the group, or even things suddenly going too quiet - these can all be clues to those nearby that something has gone wrong, and the kids need someone to step in for their own sake. The analogous solution in the online world is known
as “proactive moderation.” When done well, proactive moderation enables platforms to act just like the parents keeping an eye on the playground. The platform doesn’t listen to every little detail, doesn’t follow each kid around, and doesn’t even necessarily need to know which kid is which; but it does notice the telltale signs of trouble breaking out, and can then step closer, start actually paying attention to the details, and intervene in whatever way is appropriate. In text chat, there are a number of existing solutions which offer proactive insights to studios, many of which are able to do so while maintaining due respect for the privacy of the children involved. This problem has historically been more difficult in voice chat, leading many studios - Epic among them - to choose not to moderate voice chat at all (often in the name of preserving player privacy), though it’s become possible in recent years. A robust proactive voice moderation system – at
least one built with privacy in mind – doesn’t listen to what’s being said in conversations; it starts out monitoring from afar, looking for simple things like heated emotions, uncharacteristic behavior, or participants who aren’t supposed to be there. Only after signs of trouble are recognised does it ‘step closer’ and begin listening more closely to better understand the problem. Even then, the identities of kids and players won’t be recorded, nor will their behavior be logged long-term. Just as a Good Samaritan might flag to a teacher that “the kid in the orange shirt looks like he needs help”, a proactive voice moderation tool will flag to online platforms that “there’s someone over here you should take a look at” - ensuring the platforms can intervene and protect their users without putting user privacy at risk.
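To make that ‘step closer’ behavior a bit more concrete, here is a minimal, purely illustrative sketch (in Python) of how such tiered escalation could be structured. Everything in it - the two modes, the signal names, the threshold - is a hypothetical assumption chosen for illustration, not a description of how any particular vendor’s system actually works.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    AMBIENT = auto()   # lightweight, privacy-preserving signals only
    FOCUSED = auto()   # closer analysis, entered only after signs of trouble

@dataclass
class AmbientSignals:
    # Illustrative, coarse-grained signals; no transcript, no identities.
    emotion_score: float       # e.g. 0.0 (calm) to 1.0 (highly agitated)
    behavior_anomaly: float    # deviation from the session's usual pattern
    unexpected_participant: bool

ESCALATION_THRESHOLD = 0.8     # hypothetical tuning value

def next_mode(current: Mode, signals: AmbientSignals) -> Mode:
    """Decide whether the session warrants a closer look.

    In AMBIENT mode only coarse signals are inspected; content is not
    transcribed and no identities are stored. FOCUSED mode is entered
    only when trouble indicators cross a threshold, mirroring the
    'step closer' behavior described above.
    """
    if current is Mode.AMBIENT:
        risk = max(signals.emotion_score, signals.behavior_anomaly)
        if risk >= ESCALATION_THRESHOLD or signals.unexpected_participant:
            return Mode.FOCUSED
        return Mode.AMBIENT
    # Once focused, a human moderator (or further automated review)
    # decides on intervention; the session can be de-escalated afterwards.
    return Mode.FOCUSED

# Example: a calm session stays in ambient monitoring...
assert next_mode(Mode.AMBIENT, AmbientSignals(0.2, 0.1, False)) is Mode.AMBIENT
# ...while heated emotions trigger closer attention.
assert next_mode(Mode.AMBIENT, AmbientSignals(0.9, 0.3, False)) is Mode.FOCUSED
```

The point of the sketch is simply that the ambient stage works only on coarse, identity-free signals, and that closer listening is gated behind an explicit escalation decision.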
This gives us the opportunity to have our cake and eat it too - to enable both player safety and player privacy, despite the apparent tension between the concepts. In the physical world, we constantly walk a line between respecting the privacy of our kids and ensuring their
safety, and while online platforms have fallen behind, that doesn’t mean they can’t live up to this ideal. And, as this recent case shows, online platforms are rapidly feeling the pressure to shoot for this middle ground - not forsaking safety in the name of privacy, nor vice versa, but balancing both together, and doing it fast. (Part of the FTC’s focus on Epic came from their frustration that Epic took too long to deploy sufficient safety features.) At Modulate, we believe in an internet experience
that’s safe, private, authentic, and fun. For too long, platforms have assumed we can’t have them all, and left users in the lurch. There IS a way forward, but consumers and regulators won’t be satisfied until we put in the work and deliver experiences that check all the boxes together.