All users in all interactions of the system must be verified humans.
At the moment, unverified users are allowed to perform functions in the system. This contributes to an unnecessarily toxic environment, as there is no social accountability for "non-human behaviour."
System roles like challengers and jurors must be verified humans as well, in order to encourage better decision-making and more human-like responses. An example would be the handling of errors in submissions.
Full transparency gives the system further options in the future. We should be able to reward the most human-like behaviours with gratitude tokens/streams etc., and put social pressure on those who are not acting in line with what the system is trying to achieve.
The system currently sees toxic behaviours that are within the letter of the rules but against its spirit.
Providing social accountability within the system leverages the evolutionary strengths of humans. This is difficult for bots to mimic, which fosters long-term system resilience.