Keeping an online learning community safe, positive and creative.
Company
Makers Empire
Time
July 2020 – April 2021
My Role
Research, process design, prototyping and testing, final UI handover
Makers Empire is the world’s most fun and easy-to-use 3D design program. It is used by elementary/primary-aged students, who create over 100,000 designs a day.
As Makers Empire’s user base grew, it became increasingly difficult to moderate the content being created and shared.
To bring myself and the project team up to speed, I started by reading up on the applicable standards and familiarising myself with the methods and best practices employed by other platforms that allow children to create and share content.
I interviewed teachers to better understand their expectations of moderation in digital products and to gauge how big a problem they perceived inappropriate content to be.
I also asked teachers to share the strategies they use to mitigate and correct inappropriate behaviour in a classroom setting; they deal with it all day, after all.
We looked at the data to get a big-picture understanding of what types of designs were being reported. We used Smartlook to observe the behaviour of users who were reporting designs.
We realised that while inappropriate content is a real and ongoing concern, the larger issue was arguably the misuse of our built-in reporting tools. In other words, the reporters were causing more problems than the reported users.
In addition to those who reported designs they seemed to genuinely believe were inappropriate, we identified three troublesome behaviours amongst reporters: Tall Poppy Syndrome, Venting and Tribalism.
We noticed that the more popular designs from the “Hot Gallery” were reported with disproportionate frequency, even though popular designs are far less likely to be overtly inappropriate. In practice, they almost never are.
These Tall Poppy reports came almost exclusively from less accomplished users with no direct link to the more accomplished user they were reporting, i.e. they were not on the same school account and were not ‘followers’ of each other in-app.
We also noticed there were common themes in the frivolous reports. Reporters were using the report function to express general complaints and frustrations more than to help us identify genuinely problematic content. Common grievances included claims that popular users were copying ideas from other users, designs were too expensive and/or they just weren’t good.
"While the Bad Word filter and Sentiment Analysis helped prevent the comments section from becoming as toxic as a Facebook or Twitter tit-for-tat, the underlying dynamic was disturbingly similar."
We noticed that in some cases a design or individual would be reported by multiple users who did share a real-world (same school) or online (followers) connection. The complaints followed the same ‘venting’ themes: not good, copying, too expensive.
In some instances these reports spilled over into complaints and counter-complaints, accusations and counter-accusations, in the design comments section, which is visible to all users. In some cases, a third party would jump to the defence of the reported individual. While the Bad Word filter and Sentiment Analysis helped prevent the comments section from becoming as toxic as a Facebook or Twitter tit-for-tat, the underlying dynamic was disturbingly similar.
That’s not to say all reports were frivolous or vindictive. Some users were still making inappropriate content which needed to be addressed.
Not all inappropriate content in Makers Empire is created equal. First, we have to deal with both text and imagery.
A rude word such as “F*** YOU!!!” is much easier to detect than polite bullying: “Dear Sir/Madam, it is my sincere belief that the world would be a better place if you were no longer in it”, for example.
Detecting inappropriate imagery is harder still. The 3D models elementary school-aged users create are almost never realistic enough to register with Safe Search (or a similar API), but to a human the intention is very clear. All you need is two spheres and a column; you can imagine… Rather than focus on what users were making, I hypothesised that focusing on why they create inappropriate content might be more effective.
We identified three common reasons why we believed young users might be creating inappropriate content: Attention Seeking, Testing Boundaries and Genuine Mistakes. These probably sound very familiar to parents, teachers and anyone with a cursory understanding of childhood development.
Attention seekers know that they’re being offensive; that is the point. They likely want to be recognised, in the first instance by their peers or someone they’re trying to impress, and at a more subconscious level, perhaps by an authority figure who will give them attention.
We discovered that the majority of overtly inappropriate content was created by users who were not associated with a school account.
"by trying to prevent it [bad behaviour] through feedback, you can actually end up training users how to abuse your system"
Boundary testers are probably trying to see what they can get away with more than to offend anyone. They are exhibiting a degree of curiosity and experimentation, which are fantastic traits, but they are also exhibiting poor judgement.
The difficulty with this behaviour is that by trying to prevent it through feedback, you can actually end up training users how to abuse your system. A classic example is obfuscating bad words. Kids will figure out a way to write bad words somehow, and if you give them instant feedback as they type, you make it easier for them to work out which four-letter combinations plus spaces and symbols will trick your regex.
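To make that concrete, here is a minimal sketch of the kind of naive, obfuscation-tolerant matching involved; the character map and blocklist are invented stand-ins, not Makers Empire’s actual filter.

```python
import re

# Illustrative sketch only: a naive obfuscation-tolerant matcher.
# The mapping and blocklist are invented, not Makers Empire's filter.
LEET_MAP = str.maketrans({"@": "a", "4": "a", "3": "e",
                          "1": "i", "!": "i", "0": "o", "$": "s"})
BLOCKLIST = {"fudge"}  # stand-in for real bad words

def looks_bad(text: str) -> bool:
    # Normalise common character substitutions, then strip everything
    # that isn't a letter, so "f u.d-g3" collapses to "fudge".
    normalised = text.lower().translate(LEET_MAP)
    collapsed = re.sub(r"[^a-z]", "", normalised)
    return any(word in collapsed for word in BLOCKLIST)
```

This is exactly where the trap lies: if a check like this runs on every keystroke and flags words instantly, each rejected variant tells the child precisely which substitution slips through next. Running it silently at submission or review time leaks far less.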
Genuine mistakes are the murkiest category. There are wide, murky grey areas in what is considered inappropriate, and it varies amongst cultures, communities and families. There is even further variation in what is considered age-appropriate within those contexts.
For example, Makers Empire’s Acceptable Content Policy states, “Unacceptable content includes… Weapons, including guns and warfare”. A lot of kids like playing with toy guns. Some of our users may be growing up in households where real guns are a part of life. If they design a character holding a gun, these users are probably not trying to cause offence, but they do need to learn what is acceptable in this context.
How might we help reporters better understand what is inappropriate in the context of Makers Empire’s community standards (objective) vs. things they just don’t like (subjective)?
How might we allow users to feel validated when they find something that bothers them, but isn’t actually inappropriate?
How might we help users develop more empathy and tolerance for things and people they don’t like, understand and/or agree with?
How might we educate low-level/first-time offenders about what is appropriate in our community?
How might we enhance the ways we protect our community from users who are intentionally creating inappropriate content?
Yes, we (probably) can.
There is no silver bullet that will make all users behave in the manner we would like. We worked on our tech and processes behind the scenes, and took the opportunity to try some pretty big UX changes up front.
We built, tested and iterated upon our solutions over almost 10 months.
We reviewed the Bad Word filter we were using and, counterintuitively, tweaked our Sentiment Analysis settings to be a little more lenient.
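To illustrate what “a little more lenient” means in practice, here is a hypothetical sketch of a two-stage comment check; the score scale and cutoff values are assumptions, not our actual settings or sentiment provider.

```python
# Hypothetical sketch: sentiment_score assumed in [-1, 1].
# The cutoff is invented; before the tweak it would have been stricter
# (say -0.4), so mildly negative but legitimate comments were blocked.
NEGATIVE_CUTOFF = -0.75

def allow_comment(sentiment_score: float, contains_bad_word: bool) -> bool:
    if contains_bad_word:  # hard stop from the Bad Word filter
        return False
    # More lenient: honest criticism passes, outright hostility does not.
    return sentiment_score > NEGATIVE_CUTOFF
```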
We also started recording the IP address and device ID of reported users so we could better identify and preemptively block new accounts created by serial offenders.
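The shape of that check might look something like this sketch; all names are invented and the real matching logic is more nuanced.

```python
from dataclasses import dataclass

# Names are hypothetical; this only sketches the shape of the lookup.
@dataclass(frozen=True)
class Fingerprint:
    ip_address: str
    device_id: str

blocked_fingerprints: set[Fingerprint] = set()

def record_moderated_user(fp: Fingerprint) -> None:
    blocked_fingerprints.add(fp)

def should_block_signup(fp: Fingerprint) -> bool:
    # Block (or shadow-ban) a new account that shares a device or IP
    # address with a previously moderated one.
    return any(fp.device_id == b.device_id or fp.ip_address == b.ip_address
               for b in blocked_fingerprints)
```

Matching on IP alone would be too blunt in practice, since a whole school can sit behind one address, so a real implementation would weight device ID more heavily.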
To allow users to feel validated when they want to complain about a particular design or another user, we modified the reporting process, asking users to explain why they were reporting a design:
1. It’s offensive or not good for kids (issues that we do need to know about)
2. I don’t like this design (things we probably don’t really need to know about)
3. Other: tell us why you are reporting this design
When a user reports that they just “don’t like this design”, we try to turn it into a positive intervention. We explain in simple language that some users are still developing their design skills, some people have different tastes, it is possible for different people to have similar ideas without copying one another, value is relative, and everyone has the right to decide what their efforts are worth.
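As a sketch of how the three options might route (names and return values are illustrative, not our production workflow):

```python
from enum import Enum

class ReportReason(Enum):
    OFFENSIVE = 1   # "It's offensive or not good for kids"
    DISLIKE = 2     # "I don't like this design"
    OTHER = 3       # free-text explanation

def handle_report(reason: ReportReason, free_text: str = "") -> str:
    # Return values are illustrative stand-ins for real workflow steps.
    if reason is ReportReason.OFFENSIVE:
        return "queue_for_moderators"        # the reports we need to see
    if reason is ReportReason.DISLIKE:
        return "show_positive_intervention"  # taste, skill and value vary
    return "queue_for_triage"                # 'Other' gets a human read
```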
We decided to make the ability to share content publicly and interact with the Makers Empire community an earned privilege rather than a default right. This includes sharing designs to the public gallery and commenting on other users’ work.
To unlock these privileges, new or returning users must become a Makers Empire Member. To become a Member they must either: A. be added or approved by a teacher on a school account, or B. verify a valid email address.
Members must also agree to a pledge, the “Maker Promise”:
Do you promise to follow these simple rules in Makers Empire?
• No violent designs. No weapons.
• No bad language. No bullying.
• Be creative. Be positive. Be respectful.
The Maker Promise outlines the essence of Makers Empire’s community standards, in clear, kid-friendly language. We tested the wording with elementary-aged students and teachers to ensure the promise is accessible and understandable.
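Pulled together, the membership gate might look something like this sketch, with invented field names:

```python
from dataclasses import dataclass

# Field names are invented for illustration.
@dataclass
class User:
    approved_by_teacher: bool = False   # path A: school account
    email_verified: bool = False        # path B: valid email address
    accepted_maker_promise: bool = False

def can_share_and_comment(user: User) -> bool:
    # Public sharing and commenting are earned privileges: a user must
    # be a Member (path A or B) *and* have agreed to the Maker Promise.
    is_member = user.approved_by_teacher or user.email_verified
    return is_member and user.accepted_maker_promise
```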
Sometimes a comment makes it past our Bad Word filter and Sentiment Analysis that, while not overtly offensive and maybe not even negative, is unhelpful, off-topic or simply doesn’t suit the creative, positive tone of the community we’re trying to foster.
In this case users can still help moderate the comment by ‘unliking’ it (a broken-heart symbol). One unlike moves the comment down the list, a second moves it further down, and a third ‘shadow bans’ the comment.
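A compact sketch of that mechanic, assuming a simple Comment shape and the usual shadow-ban convention that the author still sees their own comment:

```python
from dataclasses import dataclass

# The Comment shape and the author-only visibility of a shadow-banned
# comment are assumptions for illustration.
@dataclass
class Comment:
    author_id: str
    text: str
    unlikes: int = 0

SHADOW_BAN_THRESHOLD = 3

def comments_for(viewer_id: str, comments: list[Comment]) -> list[Comment]:
    # A shadow-banned comment stays visible to its own author, so the
    # author gets no signal that it is hidden from everyone else.
    visible = [c for c in comments
               if c.unlikes < SHADOW_BAN_THRESHOLD or c.author_id == viewer_id]
    # Each unlike pushes a comment further down the list; sorted() is
    # stable, so equally-unliked comments keep their original order.
    return sorted(visible, key=lambda c: c.unlikes)
```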
If a particularly offensive moderated user tries to create a new account on the same device or the same IP address, we automatically shadow-ban the new account too.
In the rare event that a moderated user is associated with a school account, we will contact the school and work with their teachers to develop a suitable intervention.
In an initial rework of the reporting UI, we suspected the existing report icon, a red siren, was confusing users. We changed it to a police officer, a familiar authority figure whom most children would associate with safety. This resulted in a sudden increase in reports, almost all of them frivolous. We had made the report button too attractive. We swapped it for a more generic exclamation mark (‘alert / something’s wrong’) icon, and the ratio of frivolous to valid reports dropped back to a more manageable level.