Commit e4f3a443 authored by Antti Hyttinen

Few comments.

parent 9d3a6c4c
@@ -33,7 +33,7 @@
\usepackage{footnote} % show footnotes in tables
\makesavenoteenv{table}
\newcommand{\antti}[1]{{{\color{orange} [AH: #1]}}}
\newcommand{\ourtitle}{A Causal Approach for Selective Labels}
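For reference, the \antti macro added above typesets an inline orange annotation; a hypothetical usage (the sentence and note text are illustrative, not from the paper):

% Hypothetical usage of the comment macro introduced in this commit:
The estimator is consistent.\antti{Is this still true under selective labels?}
% Typesets as: The estimator is consistent. [AH: Is this still true under selective labels?] (in orange)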
@@ -168,6 +168,8 @@ We wish to calculate the probability of undesired outcome (\outcome = 0) at a fi
& = \sum_\featuresValue \prob{\outcome = 0 | \decision = 1, \features = \featuresValue} \prob{\decision = 1 | \leniency = \leniencyValue, \features = \featuresValue} \prob{\features = \featuresValue}
\end{align*}
\antti{Here one can drop the do-operator already on the first line, by rule 2 of do-calculus: $P(Y=0|do(R=r))=P(Y=0|R=r)$. However, do-calculus formulas should be computed by first learning a graphical model and then computing the marginals from that model; this gives a more accurate result. Michael's more involved formula essentially does this, including forcing $P(Y=0|T=0,X)=0$ (the model supports the context-specific independence $Y \perp X \mid T=0$).}
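To spell out the step the comment refers to, here is a sketch of the expansion, written with the macros used in the surrounding excerpt and assuming the graph without unobservables, in which every path from \leniency to \outcome is directed through \decision (no back-door paths):

\begin{align*}
\prob{\outcome = 0 | \doop{\leniency = \leniencyValue}}
  & = \prob{\outcome = 0 | \leniency = \leniencyValue} \\
  & = \sum_\featuresValue \big[ \prob{\outcome = 0 | \decision = 1, \features = \featuresValue} \prob{\decision = 1 | \leniency = \leniencyValue, \features = \featuresValue} \\
  & \qquad + \prob{\outcome = 0 | \decision = 0, \features = \featuresValue} \prob{\decision = 0 | \leniency = \leniencyValue, \features = \featuresValue} \big] \prob{\features = \featuresValue},
\end{align*}

where the first equality is rule 2 and the second term vanishes because $\prob{\outcome = 0 | \decision = 0, \features = \featuresValue} = 0$: a negative decision precludes observing the undesired outcome. Dropping that term recovers the sum in the derivation above.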
Expanding the above derivation for model \score{\featuresValue} learned from the data
\[
\score{\featuresValue} = \prob{\outcome = 0 | \features = \featuresValue, \decision = 1},
@@ -221,7 +223,7 @@ random variables so that
\prob{\outcome = 0| \features = \featuresValue} = \dfrac{1}{1+\exp\{-\featuresValue\}}.
\]
The decision variable $\decision$ was set to 0 if the probability $\prob{\outcome = 0| \features = \featuresValue}$ resided in the top $(1-\leniencyValue)\cdot 100 \%$ of the subjects appointed for that judge. \antti{How was the final $Y$ determined? I assume $Y=1$ if $T=0$, and that if $T=1$, $Y$ was randomly sampled from $\prob{\outcome | \features = \featuresValue}$ above? Delete this comment when handled.}
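Since the comment above leaves the outcome rule open, the following minimal Python sketch of this synthetic setup makes one resolution explicit. The standard-normal features, the function name, and the quantile-based thresholding are assumptions for illustration, not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)

def simulate_judge(leniency, n=1_000):
    """Simulate one judge's caseload in the no-unobservables setting.

    Assumptions (not fixed by the excerpt): features are standard
    normal, and when T = 0 the outcome is recorded as Y = 1.
    """
    x = rng.normal(size=n)                # private features X
    p_bad = 1.0 / (1.0 + np.exp(-x))      # P(Y = 0 | X = x)
    # T = 0 for the top (1 - r) * 100% riskiest subjects of this judge.
    threshold = np.quantile(p_bad, leniency)
    t = (p_bad <= threshold).astype(int)
    # Y = 1 deterministically if T = 0; otherwise sampled from the model.
    y = np.where(t == 0, 1, (rng.random(n) > p_bad).astype(int))
    return x, t, y

x, t, y = simulate_judge(leniency=0.5)
print("release rate:", t.mean(), "failure rate among released:", 1 - y[t == 1].mean())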
Results for estimating the causal quantity $\prob{\outcome = 0 | \doop{\leniency = \leniencyValue}}$ with various levels of leniency $\leniencyValue$ under this model are presented in Figure \ref{fig:without_unobservables}.
\begin{figure}
...