\node[state] (R) [ellipse] at (0,0.0){\hspace*{-4mm}$\judge$: {\footnotesize Decision maker}\hspace*{-4mm}};
\node[state] (X) [ellipse] at (3.5,1.5) {\hspace*{-4mm}$\obsFeatures$: {\footnotesize Observed features}\hspace*{-4mm}};
\node[state] (T) [ellipse] at (3.5,0) {\hspace*{0mm}$\decision$: {\footnotesize Decision }\hspace*{-0mm}};
\node[state] (Z) [rectangle] at (7,1.5) {$\unobservable$: {\footnotesize Unobserved features}};
\node[state] (Y) [ellipse] at (7,0) {\hspace*{0mm}$\outcome$: {\footnotesize Outcome}\hspace*{-0mm}};
\path (R) edge (T)
      (X) edge (T)
The unobserved features \unobservable can be modeled as a (continuous) one-dimensional variable.
Motivated by the central limit theorem, we use a Gaussian distribution for it, and since $\unobservable$ is unobserved, we can assume its variance to be $1$ without loss of generality (any other scale can be absorbed into the coefficients that multiply $\unobservable$); thus $\unobservable\sim N(0,1)$. %, for example by using propensity scores~\cite{}.
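% As an illustration (not part of the paper's own material), the scale argument above can be sketched numerically: rescaling the unobserved variable is exactly undone by rescaling its coefficient, so fixing the variance to $1$ loses no generality. The names \texttt{beta\_z} and the logistic link below are hypothetical choices for the sketch.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Unobserved feature Z drawn from the assumed N(0, 1) distribution.
z = rng.normal(0.0, 1.0, size=10_000)

def sigmoid(a):
    # Logistic link, chosen only for illustration.
    return 1.0 / (1.0 + np.exp(-a))

# Z enters the model only through a coefficient (here `beta_z`,
# an assumed name), so scaling Z by c > 0 is absorbed by dividing
# the coefficient by c: the implied probabilities are unchanged.
beta_z, c = 0.7, 3.0
p_original = sigmoid(beta_z * z)
p_rescaled = sigmoid((beta_z / c) * (c * z))

assert np.allclose(p_original, p_rescaled)
\end{verbatim}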
%
Moreover, for simplicity of presentation, we consider the case of a single observed feature \obsFeatures -- it is straightforward to extend the model to the case of multiple features \obsFeatures.
%WELL ACTUALLY REASONING IS NOT THIS BUT THE GENERAL POINT
%ABOUT PROPENSITY SCORE; IT IS BETTER CONFIDENCE ALL OBSERVATIONS TO A SINGLE VARIABLE,
% we do this only in the compas section and it is not advertised there