From df7686b4f7b493eb6e10fa825d722c6c882af78b Mon Sep 17 00:00:00 2001
From: Antti Hyttinen <ajhyttin@gmail.com>
Date: Tue, 6 Aug 2019 14:19:52 +0300
Subject: [PATCH] ...

---
 paper/sl.tex | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/paper/sl.tex b/paper/sl.tex
index c592a92..15d7edf 100755
--- a/paper/sl.tex
+++ b/paper/sl.tex
@@ -167,7 +167,7 @@ Therefore, the aim is here to give an estimate of the FR at any given AR for any
 %The "eventual goal" is to create such an evaluator module that it can outperform (have a lower failure on all levels of acceptance rate) the deciders in the data generating process. The problem is of course comparing the performance of the deciders. We try to address that.
 
-\subsection{Modeling the Situation}
+\subsection{Causal Modeling}
 
 \begin{figure}
 \begin{tikzpicture}[->,>=stealth',node distance=1.5cm, semithick]
@@ -199,7 +199,7 @@ The outcome $Y$ is affected by the observed background factors $X$, unobserved
 We use a propensity score framework to model $X$ and $Z$: they are assumed continuous Gaussian variables, with the interpretation that they represent summarized risk factors such that higher values denote higher risk for a negative outcome ($Y=0$). Hence the Gaussianity assumption here is motivated by the central limit theorem.
-
+\setcounter{section}{1}
 \section{ Framework ( by Riku)}
-- 
GitLab
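
Note on the second hunk: `\setcounter{section}{1}` forcibly sets LaTeX's section counter to 1, so the `\section` that follows it is numbered 2 rather than continuing from wherever the counter happened to be. A minimal standalone sketch of this behavior (the section titles here are placeholders, not from the paper):

```latex
\documentclass{article}
\begin{document}
\section{First}            % numbered 1; counter is now 1
\setcounter{section}{1}    % redundant here, but pins the counter to 1
\section{Second}           % numbered 2
\end{document}
```

Pinning the counter like this masks numbering drift (e.g. a stray `\section` removed or commented out earlier in the file) instead of fixing its cause, so it is typically a temporary measure while a draft is being restructured.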