Assessments take various forms; one of them is peer assessment, where users score their peers on several dimensions.
Our assessments avoid rating users on a single scale. Instead, by forcing a choice among multiple dimensions, scores balance out within each assessment: a low score on one dimension implies a higher score on another. This makes it impossible to 'ace' an assessment and steers discussion toward the differences between users and the reasons behind those differences.
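The trade-off between dimensions can be sketched as a fixed-budget (forced-choice) allocation: every point given to one dimension is a point withheld from the others. The dimension names and the point budget below are illustrative assumptions, not the product's actual configuration.

```python
# Minimal sketch of forced-choice scoring: an assessor distributes a
# fixed budget of points across dimensions, so raising one dimension
# necessarily lowers the others. Names and budget are hypothetical.

BUDGET = 10
DIMENSIONS = ("communication", "reliability", "initiative")

def validate_allocation(scores: dict, budget: int = BUDGET) -> dict:
    """Accept an allocation only if it spends exactly the budget."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError(f"scores must cover exactly {DIMENSIONS}")
    if any(s < 0 for s in scores.values()):
        raise ValueError("scores must be non-negative")
    if sum(scores.values()) != budget:
        raise ValueError(f"scores must sum to {budget}")
    return scores

# A valid allocation: a top score everywhere is impossible by construction.
validate_allocation({"communication": 5, "reliability": 3, "initiative": 2})
```

Because the total is constant, any two users' allocations differ only in how the budget is distributed, which is exactly what the follow-up discussion focuses on.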
By combining statistics, large databases of existing results, and automated graphing, we provide assessment systems that ensure user scores are representative of, and normalized for, their target audiences.
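One simple way to normalize against a database of existing results is to express a raw score as a z-score relative to a reference population. This is a sketch under that assumption; the population data is made up for illustration, and the real system may use a different normalization.

```python
import statistics

def normalize_score(raw: float, population: list) -> float:
    """Express a raw score as a z-score relative to a reference
    population of previous results, so scores from different target
    audiences become comparable. Population values are illustrative."""
    mean = statistics.fmean(population)
    stdev = statistics.stdev(population)  # sample standard deviation
    return (raw - mean) / stdev

# Hypothetical previous results for one audience.
population = [4.0, 5.0, 6.0, 5.0, 4.0, 6.0]
z = normalize_score(6.5, population)  # positive: above this audience's mean
```

A z-score near zero means the user is typical for their audience; the same raw score can map to very different z-scores under different reference populations, which is the point of normalizing per audience.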