Commit 3a309473 authored by Antti Hyttinen's avatar Antti Hyttinen
McCandless to places.

parent c3974484
Recall from Section~\ref{sec:setting} that Figure~\ref{fig:causalmodel} provides the structure of causal relationships for quantities of interest.
We use the following causal model over this structure, building on the models used by Lakkaraju et al.~\cite{lakkaraju2017selective} and others~\cite{mccandless2007bayesian}.
First, we assume the observed and unobserved feature vectors $\obsFeatures,\unobservable$, representing risks, can be condensed into one-dimensional risk factors~\cite{mccandless2007bayesian}, for example by using propensity scores~\cite{rosenbaum1983central,austin2011introduction}. Motivated by the central limit theorem, we model these risk factors with Gaussian distributions. Furthermore, since $\unobservable$ is unobserved, we can assume its variance to be $1$ without loss of generality, thus $\unobservable \sim N(0,1)$ (any deviation from this can be absorbed by adjusting the intercepts and coefficients below).
In accordance with the selective labels setting, we have $Y=1$ whenever $T=0$. When $T=1$, the outcome follows a logistic regression model over the features $\obsFeatures$ and $\unobservable$.
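As a concrete illustration, the generative model described above can be sketched as follows. The decision rule and the logistic-regression coefficients here are hypothetical placeholders chosen for illustration; the text only fixes $\unobservable \sim N(0,1)$ and the selective-labels constraint $Y=1$ whenever $T=0$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# One-dimensional risk factors, modeled as Gaussians; the variance of the
# unobserved factor Z is fixed to 1 without loss of generality.
x = rng.normal(0.0, 1.0, n)   # observed risk factor (e.g. a propensity score)
z = rng.normal(0.0, 1.0, n)   # unobserved risk factor, Z ~ N(0, 1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Decision T in {0, 1}: an illustrative decision rule (not specified in the
# excerpt) under which high-risk subjects are less likely to be released.
t = rng.binomial(1, sigmoid(-(x + z)))

# Selective labels: Y = 1 whenever T = 0; when T = 1, Y follows a logistic
# regression over x and z (unit coefficients are an assumption).
y = np.where(t == 0, 1, rng.binomial(1, sigmoid(-(x + z))))
```

This makes the selective-labeling mechanism explicit: for negatively decided subjects the outcome is never observed, so by convention it is recorded as $Y=1$.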
\label{sec:related}
De-Arteaga et al. also note the possibility of using the decisions in the data to correct for selective labels, assuming expert consistency~\cite{dearteaga2018learning}. They directly impute decisions as outcomes and consider learning automatic decision makers. In contrast, our approach to decision-maker evaluation is based on a rigorous probabilistic model accounting for differing leniencies and unobservables. Furthermore, our approach gives accurate results even with random decision makers, which clearly violate the expert consistency assumption. \acomment{We should refer to De-Arteaga somewhere early on; they have made the same discovery as we, but presented it poorly.}
\subsection{Counterfactuals}
Recent research has shown the value of counterfactual reasoning in settings similar to ours, for fairness of decision making, and in applications such as online advertising~\cite{DBLP:journals/jmlr/BottouPCCCPRSS13,DBLP:conf/icml/Kusner0LS19,DBLP:conf/icml/NabiMS19,DBLP:conf/icml/JohanssonSS16,pearl2000}.
McCandless et al. perform a Bayesian sensitivity analysis of the priors for a model similar to ours, employing logistic regression~\cite{mccandless2007bayesian}.
\subsection{Imputation}
Discuss this:
\item Additionally they don't consider the effect of having multiple decision-makers with differing levels of leniency. (in intro)
\end{itemize}
\item Discussions of latent confounders in multiple contexts.
%\item Classical Bayesian sensitivity analysis of \citet{mccandless2007bayesian}
% \begin{itemize}
% \item Task: Bayesian sensitivity analysis of the effect of an unmeasured binary confounder on a binary response with a binary exposure variable and other measured confounders.
% \item Experiments: The writers consider the effect of different priors on the coefficient estimates of logistic regression in a beta-blocker therapy study.
% \item The authors carry out a more classical analysis of the effect of priors on the estimates. There are similarities, but there are also a lot of differences, most notably lack of selective labeling and a different model structure where the observed independent variables affect both the unobserved confounder and the result. In their model the unobserved only affects the outcome.
% \end{itemize}
\item Imputation methods and other approaches to selective labels
%\item Data augmentation approach by \citet{dearteaga2018learning}
% \begin{itemize}
% \item Task: Training predictive models to perform better under selective labeling utilizing the homogeneity of human decision makers. They base their approach on the notion that if decision makers consistently make a negative decision to some subjects they must be dangerous.
% \item Contributions: They propose a method for augmenting the selectively labeled data with observations that have a selection probability under some threshold $\epsilon$. I.e. For observations with $\decision=0$, predict $\prob{\decision~|~\obsFeatures}$, augment data so that $\outcome = 0$ when $\prob{\decision~|~\obsFeatures} < \epsilon$ instead of having missing values.
% \item In contrast: The writers assume no unobservable confounders affecting the outcome and focus only on the similarity of the assigned decisions given the features. Writers do not address the issue of leniency in their analysis.
% \end{itemize}
\item Doubly robust methods, propensity score and other matching techniques
\end{itemize}
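The augmentation rule discussed above can be sketched as follows. This is a minimal illustrative implementation of the thresholding idea of \citet{dearteaga2018learning} on toy data, assuming a logistic-regression estimate of the selection probability $\prob{\decision~|~\obsFeatures}$; the data, model choice, and threshold value are assumptions of this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
X = rng.normal(size=(n, 3))                        # toy observed features
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # toy decisions
Y = np.where(T == 1, rng.binomial(1, 0.7, n), -1)  # -1 marks a missing label

# Estimate the selection probability P(T = 1 | X) from the observed decisions.
p_select = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

# Augment: for negatively decided subjects (T = 0) whose selection
# probability falls below a threshold eps, impute the outcome Y = 0
# instead of leaving it missing.
eps = 0.1
augment = (T == 0) & (p_select < eps)
Y_aug = np.where(augment, 0, Y)
```

The intuition is that subjects whom (homogeneous) decision makers almost never select can safely be labeled as negative outcomes.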