Additional noise is added to the outcome of each case via $\epsilon_\outcome$, w
Our experimentation involves two categories of decision makers: (i) the set of decision makers \humanset, whose decisions are recorded in a dataset, and (ii) the decision maker \machine, whose performance is to be evaluated on the log of cases decided by \humanset.
%
We describe both of them below.
\mpara{Decisions by \humanset}\newline
%
Among cases that receive a positive decision, the probability of a positive or negative outcome is higher or lower depending on the quantity below (see Equation~\ref{eq:defendantmodel}), which we refer to as the `{\it risk score}' of each case
Lower values indicate that a negative outcome is more likely.
%
%
We assume that the decision makers are well-informed and rational: their decisions reflect the probability that a case would have a positive or negative outcome.
%
The remaining parameter $\alpha_\judgeValue$ is set so as to conform with a pre-determined level of leniency \leniency.
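The decision process described above can be sketched in code. The snippet below is an illustrative simulation, not the paper's implementation: the feature distribution, the form of the risk score, and the noise model are assumptions made for the sketch. It shows the one mechanism the text does specify, namely that a rational decision maker with leniency \leniency{} gives positive decisions to a \leniency{} fraction of cases, which fixes the threshold $\alpha_\judgeValue$ as a quantile of the risk scores, and that outcomes (perturbed by the noise term $\epsilon$) are observed only for positively decided cases.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_decisions(n_cases=10_000, leniency=0.5, noise_scale=0.1):
    """Illustrative sketch of the data-generating process.

    A well-informed, rational decision maker gives a positive decision
    to the `leniency` fraction of cases with the highest risk scores;
    the threshold alpha is therefore the (1 - leniency) quantile of
    the scores. Distributions below are assumptions for the sketch.
    """
    # Latent risk score of each case (higher = positive outcome more likely).
    score = rng.normal(size=n_cases)
    # Rational decision maker: alpha chosen so that exactly a
    # `leniency` fraction of cases receives a positive decision.
    alpha = np.quantile(score, 1.0 - leniency)
    decision = score >= alpha
    # Additional noise epsilon is added to the outcome of each case.
    noisy = score + rng.normal(scale=noise_scale, size=n_cases)
    # Outcomes are observed only for positively decided cases
    # (-1 marks an unobserved outcome).
    outcome = np.where(decision, (noisy > 0).astype(int), -1)
    return decision, outcome

decision, outcome = simulate_decisions(leniency=0.5)
```

By construction, roughly half of the cases receive a positive decision, and outcomes are missing for all negatively decided cases; this is the selective-labels structure the evaluation of \machine{} must contend with.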