Commit c3974484 authored by Antti Hyttinen's avatar Antti Hyttinen
Artega to conclusions.

parent 7591df36
...@@ -7,6 +7,71 @@
%% Saved with string encoding Unicode (UTF-8)
@inproceedings{DBLP:conf/icml/JohanssonSS16,
author = {Fredrik D. Johansson and
Uri Shalit and
David A. Sontag},
editor = {Maria{-}Florina Balcan and
Kilian Q. Weinberger},
title = {Learning Representations for Counterfactual Inference},
  booktitle    = {Proceedings of the 33rd International Conference on Machine Learning,
                  {ICML} 2016, New York City, NY, USA, June 19-24, 2016},
series = {{JMLR} Workshop and Conference Proceedings},
volume = {48},
pages = {3020--3029},
publisher = {JMLR.org},
year = {2016},
url = {http://proceedings.mlr.press/v48/johansson16.html},
timestamp = {Fri, 15 Nov 2019 17:16:09 +0100},
biburl = {https://dblp.org/rec/bib/conf/icml/JohanssonSS16},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@article{DBLP:journals/jmlr/BottouPCCCPRSS13,
author = {L{\'{e}}on Bottou and
Jonas Peters and
Joaquin Qui{\~{n}}onero Candela and
Denis Xavier Charles and
Max Chickering and
Elon Portugaly and
Dipankar Ray and
Patrice Y. Simard and
Ed Snelson},
title = {Counterfactual reasoning and learning systems: the example of computational
advertising},
journal = {J. Mach. Learn. Res.},
volume = {14},
number = {1},
pages = {3207--3260},
year = {2013},
url = {http://dl.acm.org/citation.cfm?id=2567766},
timestamp = {Wed, 10 Jul 2019 15:27:56 +0200},
biburl = {https://dblp.org/rec/bib/journals/jmlr/BottouPCCCPRSS13},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{DBLP:conf/icml/NabiMS19,
author = {Razieh Nabi and
Daniel Malinsky and
Ilya Shpitser},
editor = {Kamalika Chaudhuri and
Ruslan Salakhutdinov},
title = {Learning Optimal Fair Policies},
booktitle = {Proceedings of the 36th International Conference on Machine Learning,
{ICML} 2019, 9-15 June 2019, Long Beach, California, {USA}},
series = {Proceedings of Machine Learning Research},
volume = {97},
pages = {4674--4682},
publisher = {{PMLR}},
year = {2019},
url = {http://proceedings.mlr.press/v97/nabi19a.html},
timestamp = {Tue, 11 Jun 2019 15:37:38 +0200},
biburl = {https://dblp.org/rec/bib/conf/icml/NabiMS19},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@article{rosenbaum1983central,
  title={The central role of the propensity score in observational studies for causal effects},
  author={Rosenbaum, Paul R and Rubin, Donald B},
......
...@@ -5,9 +5,9 @@
In this paper we considered the evaluation of (automatic) decision makers, which is vitally needed given current aims of replacing human decision making with various automatic decision making procedures. The challenge is that the evaluation often must be based on data where the present decisions imply selective labeling and missing data, biasing any standard statistical analysis. We showed that with proper causal modeling, automatic decision makers can be evaluated even from such selectively labeled data. In contrast to previous methods, our proposed approach allows for more accurate evaluations, with less variation, also in settings where evaluation was not possible before.
In the future we will examine further generalizing the setting and modeling assumptions: more intricate differences in decision makers' behaviour could be modeled, e.g., by hierarchical Bayesian modeling.
%
Since our approach predicts outcomes based on decisions made by educated decision makers, it is still unclear how much this benefits the estimation of the statistical models that automatic decision makers are ultimately based on~\cite{dearteaga2018learning}.
%
We believe such approaches will allow for better evaluations in new application fields, ensuring the accuracy and fairness of automatic decision making procedures that can then be adopted in society.
......
...@@ -4,7 +4,22 @@
\section{Related work}
\label{sec:related}
\acomment{Very preliminary.}
De-Arteaga et al. also note the possibility of using the decisions in the data to correct for selective labels, assuming expert consistency. They directly impute decisions as outcomes and consider learning automatic decision makers~\cite{dearteaga2018learning}. In contrast, our approach to decision maker evaluation is based on a rigorous probabilistic model accounting for different leniencies and unobservables. Furthermore, our approach gives accurate results even with random decision makers that clearly violate the expert consistency assumption. \acomment{We should refer to De-Arteaga somewhere early on; they have made the same discovery as we, but presented it poorly.}
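For concreteness, the imputation idea mentioned above can be sketched as follows. This is our own minimal illustration, not De-Arteaga et al.'s implementation; the function name, array layout, and encoding (decision 1 = positive decision whose outcome is observed) are assumptions made here:

```python
import numpy as np

def impute_decisions_as_outcomes(decisions, outcomes, labeled):
    """Sketch of imputing decisions as outcomes under expert consistency.

    decisions: binary human decisions (1 = positive decision)
    outcomes:  observed binary outcomes; entries at unlabeled positions
               are placeholders and never read
    labeled:   boolean mask, True where the outcome was actually observed
    """
    y = outcomes.astype(float).copy()
    # Expert consistency assumption: where selective labeling hides the
    # true outcome, treat the human decision itself as the outcome label.
    y[~labeled] = decisions[~labeled]
    return y

decisions = np.array([1, 0, 1, 0])
outcomes = np.array([1, 0, 0, 0])  # unlabeled entries are placeholders
labeled = np.array([True, False, True, False])
print(impute_decisions_as_outcomes(decisions, outcomes, labeled))
# [1. 0. 0. 0.]
```

The imputed labels can then be used to train an automatic decision maker on the full population rather than only on the selectively labeled subset.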
\subsection{Counterfactuals}
Recent research has shown the value of counterfactual reasoning in settings similar to ours, for the fairness of decision making, and in applications such as online advertising~\cite{DBLP:journals/jmlr/BottouPCCCPRSS13,DBLP:conf/icml/Kusner0LS19,DBLP:conf/icml/NabiMS19,DBLP:conf/icml/JohanssonSS16,pearl2000}.
\subsection{Imputation}
\subsection{Older}
%
Although contraction is computationally simple and efficient and estimates the true failure rate well, it has some limitations.
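A minimal sketch of the contraction estimator of \citet{lakkaraju2017selective} may make its simplicity concrete. The interface below is our assumption (the method is applied to the cases of the most lenient decision maker): the machine keeps the lowest-risk cases among those the lenient human released, up to the target acceptance rate, and the failure rate is taken over all of that decision maker's cases.

```python
import numpy as np

def contraction(risk_scores, failures, n_total, target_rate):
    """Sketch of the contraction estimator (after Lakkaraju et al. 2017).

    risk_scores: model-predicted failure risk for the cases *released*
                 by the most lenient human decision maker
    failures:    observed binary failure indicator (1 = failure) for
                 those same released cases
    n_total:     total number of cases assigned to that decision maker
    target_rate: acceptance rate at which to evaluate the machine
    """
    n_keep = int(target_rate * n_total)
    # The machine releases the n_keep lowest-risk cases among those the
    # lenient human already released; outcomes for them are observed.
    kept = np.argsort(risk_scores)[:n_keep]
    # Estimated failure rate over all of the decision maker's cases.
    return failures[kept].sum() / n_total

risk = np.array([0.1, 0.2, 0.9, 0.8])
fail = np.array([0, 0, 1, 1])
print(contraction(risk, fail, n_total=8, target_rate=0.25))  # 0.0
print(contraction(risk, fail, n_total=8, target_rate=0.5))   # 0.25
```

The sketch also exposes the limitation discussed in the text: the estimate only uses cases released by the single most lenient decision maker, so it cannot be computed at acceptance rates above that decision maker's leniency.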
...@@ -29,7 +44,7 @@ The disadvantage of that is that it may delay the presentation of the main contr
On the other hand, we should make sure that competing methods like \citet{lakkaraju2017selective} are sufficiently described before they appear in experiments.
}
Discuss this:
\begin{itemize}
\item Lakkaraju and contraction. \cite{lakkaraju2017selective}
......