\documentclass[sigconf,anonymous]{acmart}
    % \documentclass[sigconf]{acmart}
    
    
    
    \usepackage{tikz}
    \usepackage{tikz-cd}
    \usetikzlibrary{arrows,automata, positioning}
    
    
    % Packages
    \usepackage{type1cm}     % type1 computer modern font
    \usepackage{graphicx}     % advanced figures
    \usepackage{xspace}     % fix space in macros
    \usepackage{balance}     % to better equalize the last page
    \usepackage{multirow}     % multi rows for tables
    \usepackage[font={bf}, tableposition=top]{caption}     % captions on top for tables
    \usepackage{bold-extra}     % bold + {small capital, italic}
    \usepackage{siunitx}          % \num for decimal grouping
    \usepackage[vlined,linesnumbered,ruled,noend]{algorithm2e}     % algorithms
    \usepackage{booktabs}     % nicer tables
    %\usepackage[hyphens]{url}     % handle long urls
    %\usepackage[bookmarks, pdftex, colorlinks=false]{hyperref}     % clickable references
    %\usepackage[square,numbers]{natbib}     % better references
    \usepackage{microtype}    % compress text
    \usepackage{units}     % nicer slanted fractions
    \usepackage{mathtools}     % amsmath++
    %\usepackage{amssymb}     % math symbols
    %\usepackage{amsmath}
    \usepackage{relsize}
    \usepackage{caption}
    \captionsetup{belowskip=6pt,aboveskip=2pt} % to save space.
    %\usepackage{subcaption}
    % \usepackage{multicolumn}
    \usepackage[]{inputenc}
    \usepackage{xfrac}
    \RequirePackage{graphicx,color}
    \usepackage[font={small}]{subfig} % subfig, 4 figures in a row
    \usepackage{pifont}
    \usepackage{footnote} % show footnotes in tables
    \makesavenoteenv{table}
    
    
    \newcommand{\acomment}[1]{{{\color{orange} [A: #1]}}}
    \newcommand{\rcomment}[1]{{{\color{red} [R: #1]}}}
    \newcommand{\mcomment}[1]{{{\color{blue} [M: #1]}}}
    
    %\newcommand{\ourtitle}{Working title: From would-have-beens to should-have-beens: Counterfactuals in model evaluation}
    
    
    \newtheorem{problem}{Problem}
    
    
    \newcommand{\ourtitle}{Evaluating Decision Makers over Selectively Labeled Data}
    
    
    \input{macros}
    
    \usepackage{chato-notes}
    
    
    
    \title{\ourtitle}
    
    \author{Michael Mathioudakis}
    \affiliation{%
      \institution{University of Helsinki}
      \city{Helsinki} 
      \country{Finland} 
    }
    \email{michael.mathioudakis@helsinki.fi}
    
    
    \begin{abstract}
    
    %We show how a causality-based approach can be used to estimate the performance of prediction algorithms in `selective labels' settings -- with particular application to `bail-or-jail' judicial decisions.
    
An increasing number of important decisions affecting people's lives are being made by machine learning and AI systems.
We study how to evaluate the quality of such decision makers.
The major difficulty in such an evaluation is that the existing decision makers in use, whether AI or human, influence the data the evaluation is based on. For example, when deciding whether a defendant should be given bail or kept in jail, we cannot directly observe possible offences by defendants that the decision-making system in use decides to keep in jail. To evaluate decision makers in these difficult settings, we derive a flexible Bayesian approach that utilizes counterfactual-based imputation. Compared to the previous state of the art, the approach gives more accurate estimates of decision quality with lower variance. The approach is also shown to be robust to variations in the decision mechanisms that generated the data.
    
    \end{abstract}
    
    
    \begin{document}
    
    
    \fancyhead{}
    \maketitle
    
    \renewcommand{\shortauthors}{Authors}
    
    
    
    \section{Introduction} 
    
    %\acomment{'Decision maker' sounds and looks much better than 'decider'! Can we use that?}
    
    %\acomment{We should be careful with the word bias and unbiased, they may refer to statistical bias of estimator, some bias in the decision maker based on e.g. race, and finally selection bias.}
    
Nowadays, many decisions that affect the course of human lives are being automated. In addition to lowering costs, computational models could enhance the accuracy and fairness of decision making.

The advantage of using models does not necessarily lie in raw throughput, i.e., that a machine can make more decisions, but rather in that a machine can quantify its uncertainty, can learn from a vast set of information, and, with care, can be made as unbiased as possible.
    
However, before deploying any decision-making algorithm, it should be evaluated to show that it actually improves on the previous, often human, decision making: that of a judge, a doctor, or another decision maker whose decisions determine which outcome labels are available. This evaluation is far from trivial.
    
    %Although, evaluating algorithms in conventional settings is trivial, when (almost) all of the labels are available, numerous metrics have been proposed and are in use in multiple fields.
Specifically, `selective labels' settings arise in situations where the data are the product of a decision mechanism that prevents us from observing outcomes for part of the data \cite{lakkaraju2017selective}. As a typical example, consider bail-or-jail decisions in judicial settings: a judge decides whether to grant bail to a defendant based on whether the defendant is considered likely to violate bail conditions while awaiting trial -- and therefore a violation can occur only if bail is granted. Naturally, similar scenarios arise in many applications, from economics to medicine.
    %For example, given a data set of bail violations and bail/jail decision according some background factors, there will never be bail violations on those subjects kept in jail by the current decision making mechanism, hence the evaluation of a decision bailing such subjects is left undefined.
    
Such settings give rise to questions about the effect of alternative decision mechanisms -- e.g., `how many defendants would violate bail conditions if more bail decisions were granted?'. In other words, one faces the challenge of estimating the performance of an alternative, potentially automated, decision policy that might make different decisions than the ones found in the existing data.
    
    
    %In settings like judicial bail decisions, some outcomes cannot be observed due to the nature of the decisions. 
    
    
This can be seen as a complicated missing data problem, where the missingness of an item is connected to its outcome and where the available labels are not a random sample of the true population. Lakkaraju et al. recently named this the selective labels problem \cite{lakkaraju2017selective}. The selective labels issue has also been addressed in the causal inference literature as a form of selection bias, although that discussion has mainly concentrated on recovering causal effects, and the model structure considered has usually been different (Pearl, Bareinboim etc.). Pearl calls the unobservability of the outcome under an alternative decision the `fundamental problem' of causal inference \cite{bookofwhy}.
    
Recently, Lakkaraju et al. presented a method for the evaluation of decision-making mechanisms called contraction \cite{lakkaraju2017selective}. It assumes that subjects are randomly assigned to decision makers, each with a given leniency level. Under these assumptions, an estimate of the performance can be obtained by essentially considering only the most lenient decision maker. Contraction was shown to perform well compared to other previously presented methods.
    
For contraction to work, we need lenient decision makers that make decisions on a large number of subjects. If the aim is to evaluate a decision maker that is better than those in the data, there may not be sufficiently lenient decision makers available. Furthermore, evaluating against only one decision maker when possibly many are present may produce estimates with higher variance. In reality, the decision makers in the data are not perfect and may, for example, be biased. Our aim is to develop a method that overcomes these challenges and limitations.
    
    
In this paper we propose a novel, modular framework that provides a systematic way of evaluating decision makers from selectively labeled data. Our approach is based on imputing the missing labels using counterfactual reasoning. We also build on Jung et al., who present a method for constructing optimal policies, and show that their approach can also be applied to the selective labels setting \cite{jung2018algorithmic}.
    %to evaluate the performance of predictive models in settings where selective labeling and latent confounding is present. We use theory of counterfactuals and causal inference to formally define the problem.
We define a flexible Bayesian approach and perform the inference with modern computational tools.
    
    
    
    %\begin{itemize}
    %\item What we study
    %	\begin{itemize}
    %		\item We studied methods to evaluate the performance of predictive algorithms/models when the historical data suffers from selective labeling and unmeasured confounding.
    %	\end{itemize}
    %item Motivation for the study%
    %	\begin{itemize}
    
    
    %		\item %Fairness has been discussed in the existing literature and numerous publications are available for interested readers. Our emphasis on this paper is on pure performance, getting the predictions accurate.
    
    %		\item 
    %		\item %
    %	\end{itemize}
    %\item Present the setting and challenge:
    %	\begin{itemize}
    %		\item 
    %
    	%	\item 
    %		\item %Characteristically, in many of the settings the decisions hiding the outcomes are made by different deciders
    %		\item Labels are missing non-randomly, decisions might be made by different deciders who differ in leniency.
    %		\item So this might lead to situation where subjects with same characteristics may be given different decisions due to the differing leniency.
    %		\item Of course the differing decisions might be attributable to some unobserved information that the decision-maker might have had available ude to meeting with the subject.
    %		\item %The explainability of black-box models has been discussed in X. We don't discuss fairness.
    
    %		\item In settings like judicial bail decisions, some outcomes cannot be observed due to the nature of the decisions. This results in a complicated missing data problem where the missingness of an item is connected with its outcome and where the available labels aren't a random sample of the true population. Recently this problem has been named the selective labels problem.
    	%\end{itemize}
    %\item 
    %Related work
    	%\begin{itemize}
    %		\item In %the original paper, Lakkaraju et al. presented contraction which performed well compared to other methods previously presented in the literature. 
    %		\item We wanted to benchmark our approach to that and show that we can improve on their algorithm in terms of restrictions and accuracy. 
    %		\item %Restrictions = our method doesn't have so many assumptions (random assignments, agreement rate, etc.) and can estimate the performance on all levels of leniency despite the judge with the highest leniency. See fig 5 from Lakkaraju
    %		\item 
    %		\item They didn't have selective labeling nor did they consider that the judges would differ in leniency.%
    %		\item 
    %		\item %Latent confounding has bee discussed by X when discussing the effect of latent confounders to ORs. ec etc.
    %	\end{itemize}
    %\item Our contribution
    %	\begin{itemize}
    %	\item 
    
    %	\end{itemize}
    %\end{itemize}
    
    \section{The Selective Labels Framework}
    
    We begin by formalizing the selective labels setting.
    
Let the binary variable $T$ denote a decision, where $T=1$ is interpreted as a positive decision. The binary variable $Y$ measures some outcome that is affected by the decision $T$. The selective labels issue is that in the observed data, when $T=0$, we deterministically\footnote{Alternatively, we could see it as not observing the value of $Y$ when $T=0$, inducing a problem of selection bias.\acomment{Want to keep this interpretation in the footnote not to interfere with the main interpretation.}} have $Y=1$.
    
For example, consider that
$T$ denotes a decision to jail ($T=0$) or bail ($T=1$) a defendant.
Outcome $Y=0$ then marks that the defendant offended and $Y=1$ that the defendant did not. When a defendant is jailed ($T=0$), the defendant obviously cannot violate bail, and thus always $Y=1$.
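For concreteness, the following minimal sketch (illustrative Python; the variable names are ours and it is not part of our method) shows the effect of selective labeling on recorded data: whenever $T=0$, the recorded label is $Y=1$, regardless of what would have happened under a positive decision.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
t = rng.integers(0, 2, size=10)       # decisions: 1 = bail, 0 = jail
y_bail = rng.integers(0, 2, size=10)  # outcome had bail been granted
y_obs = np.where(t == 1, y_bail, 1)   # jailed defendants cannot violate bail,
                                      # so the recorded label is always Y = 1
\end{verbatim}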
    
    \subsection{Decision Makers}
    
A decision maker $D(r)$ makes the decision $T$ based on the characteristics of the subject. We assume the decision maker is given a leniency $r$ as input, which defines the fraction of subjects for which it makes a positive decision. A decision maker may be a human or a machine learning system. It seeks to predict the outcome $Y$ based on what it knows and then decides $T$ based on this prediction: a negative decision $T=0$ is preferred for subjects predicted to have a negative outcome $Y=0$, and a positive decision $T=1$ when the outcome is predicted to be positive $Y=1$.
    
    
    % We especially consider machine learning system that need to use similar data as used for the evaluation; they also need to take into account the selective labels issue.
    
In the bail-or-jail example, a decision maker seeks to jail ($T=0$) all dangerous defendants that would violate their bail ($Y=0$), but release the defendants that would not. The leniency $r$ refers to the proportion of bail decisions.
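As a minimal illustration (a sketch only; the function and variable names are ours, not part of the framework), a machine decision maker with leniency $r$ could release the $r$ fraction of subjects predicted to be least risky:
\begin{verbatim}
import numpy as np

def decide(p_good, r):
    """Positive decision (T=1) for the fraction r of subjects with
    the highest predicted probability of a positive outcome (Y=1)."""
    n = len(p_good)
    k = int(np.floor(r * n))        # number of positive decisions
    order = np.argsort(-p_good)     # most promising subjects first
    t = np.zeros(n, dtype=int)
    t[order[:k]] = 1
    return t
\end{verbatim}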
    
    The difference between the decision makers in the data and $D(r)$ is that usually we cannot observe all the information that has been available to the decision makers in the data.
    % In addition, we usually cannot observe the full decision-making process of the decider in the data step contrary to the decider in the modelling step.
With unobservables we refer to some latent, usually non-written information regarding a certain outcome that is only available to the decision-maker. For example, a judge in court can observe the defendant's behaviour and level of remorse, which might be indicative of a bail violation. We denote the latent information regarding a person's guilt with the variable \unobservable.
    
    
    
    \subsection{Evaluating Decision Makers}
    
    
     The goodness of a decision maker can be examined as follows. 
    
    %Acceptance rate (AR) is the number of positive decisions ($T=1$) divided by the number of all decisions. 
    %DO WE NEED ACCEPTANCE RATE ANY MORE 
    Failure rate (FR) is the number of undesired outcomes ($Y=0$) divided by the number of all decisions. 
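Written out, over $n$ decisions with recorded outcomes $y_1, \ldots, y_n$, this is
\begin{equation}
\textrm{FR} = \frac{1}{n} \sum_{i=1}^{n} \delta\{y_i = 0\},
\end{equation}
where $\delta\{\cdot\}$ denotes the indicator function.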
    
    % One special characteristic of FR in this setting is that a failure can only occur with a positive decision ($T=1$).
    %That means that a failure rate of zero can be achieved just by not giving any positive decisions but that is not the ultimate goal.
    
A good decision maker achieves as low a failure rate (FR) as possible, at any leniency level.
    
However, the data we have does not directly provide a way to evaluate FR. If a decision maker decides $T=1$ for a subject that had $T=0$ in the data, the outcome $Y$ recorded in the data is based on the decision $T=0$ and hence $Y=1$ regardless of the decision taken by $D$. The number of negative outcomes $Y=0$ for these decisions needs to be estimated in some non-trivial way.
    
In the example, the difficulty occurs when a decision maker decides to bail ($T=1$) a defendant that was jailed in the data: we cannot directly observe whether the defendant would have offended or not.
    
Therefore, the aim here is to give an estimate of FR at any given leniency for any decision maker $D$, formalized as follows:
    \begin{problem}
    Given selectively labeled data, and a decision maker $D(r)$, give an estimate of the failure rate FR for any leniency $r$.
    \end{problem}
    \noindent
    
The estimate of the evaluator should be accurate at all levels of leniency.
Such an estimate is vital for the deployment of machine learning and AI systems into everyday use.
    
    % Given the selective labeling of data and the latent confounders present, our goal is to create an evaluator module that can output a reliable estimate of a given decider module's performance. We use acceptance rate and failure rate as measures against which we compare our evaluators because they have direct and easily understandable counterparts in the real world / applicable domains. The evaluator module should be able to accurately estimate the failure rate for all levels of leniency and all data sets.
    
    %The "eventual goal" is to create such an evaluator module that it can outperform (have a lower failure on all levels of acceptance rate) the deciders in the data generating process. The problem is of course comparing the performance of the deciders. We try to address that.
    
    \subsection{Causal Modeling}
    
    \begin{figure}
        \begin{tikzpicture}[->,>=stealth',node distance=1.5cm, semithick]
    
      \tikzstyle{every state}=[fill=none,draw=black,text=black]
    
      \node[state] (R)                    {$R$};
      \node[state] (X) [right of=R] {$X$};
      \node[state] (T) [below of=X] {$T$};
      \node[state] (Z) [rectangle, right of=X] {$Z$};
      \node[state] (Y) [below of=Z] {$Y$};
    
      \path (R) edge (T)
            (X) edge (T)
    	     edge (Y)
            (Z) edge (T)
    	     edge (Y)
            (T) edge (Y);
    \end{tikzpicture}
\caption{ $R$ is the leniency of the decision maker, $T$ is a binary decision, and $Y$ is the outcome that is selectively labeled. Background features $X$ for a subject affect the decision and the outcome. Additional background features $Z$ are visible only to the decision maker in use. }\label{fig:model}
    \end{figure}
    
We model the selective labels setting as summarized in Figure~\ref{fig:model} \cite{lakkaraju2017selective}.

The outcome $Y$ is affected by the observed background factors $X$ and the unobserved background factors $Z$. These background factors also influence the decision $T$ taken in the data. Hence $Z$ includes information that was used by the decision maker in the data but that is not available to us as observations.
There may also be other background factors that affect $Y$ but not $T$. Finally, we assume the decision is affected by an observed leniency level $R \in [0,1]$ of the decision maker.
    
We use a propensity score framework to model $X$ and $Z$: they are assumed to be continuous Gaussian variables, with the interpretation that they represent summarized risk factors, such that higher values denote a higher risk of a negative outcome ($Y=0$). The Gaussianity assumption is thus motivated by the central limit theorem.
    
    \acomment{Not sure if this is good to discuss here or in the next section: if we would like the next section be full of our contributions and not lakkarajus, we should place it here.}
    
    %\setcounter{section}{1}
    
    
    %\section{ Framework ( by Riku)}
    
    %In this section, we define the key terms used in this paper, present the modular framework for selective labels problems and state our problem.
    %Antti: In conference papers we do not waste space for such in this paper stuff!! In journals one can do that.
    
    %\begin{itemize}
    %\item Definitions \\
    %	In this paper we apply our approach on binary (positive / negative) outcomes, but our approach is readily extendable to accompany continuous or categorical responses. Then we could use e.g. sum of squared errors or other appropriate metrics as the measure for good performance.
    %	With positive or negative outcomes we refer to...
    	%\begin{itemize}
    	%\item Failure rate
    %		\begin{itemize}
    %		\item %Failure rate (FR) is defined as the ratio of undesired outcomes to given decisions. One special characteristic of FR in this setting is that a failure can only occur with a positive decision / we can only observe the outcome when the corresponding decision is positive.
    %		\item %That means that a failure rate of zero can be achieved just by not giving any positive decisions but that is not the ultimate goal. (rather about finding a good balance. > Resource issues in prisons etc.)
    		%\end{itemize}
    %	\item Acceptance rate
    		%\begin{itemize}
    %		\item %Acceptance rate (AR) or leniency is defined as the ratio of positive decisions to all decisions that a decision-maker will give. (Semantically, what is the difference between AR and leniency? AR is always computable, leniency doesn't manifest.) A: a good question! can we get ir of one
    		%\item
    		
    		% In some settings, (justice, medicine) people might want to find out if X\% are accepted what is the resulting failure rate, and what would be the highest acceptance rate to have to have the failure rate at an acceptable level. 
    %		\item We want to know the trade-off between acceptances and failure rate.
    %		\item %Lakkaraju mentioned the problem in the data that judges which have a higher leniency have labeled a larger portion of the data (which might results in bias).
    %		\item As mentioned earlier, these differences in AR might lead to subjects getting different decisions while haven the same observable and unobservable characteristics.
    		%\end{itemize}
    %	\item % Some deciders might have an incentive for positive decisions if it can mean e.g. savings. Judge makes saving by not jailing a defendant. Doctor makes savings by not assigning patient for a higher intensity care. (move to motivation?)
    	%\item 
    %\end{itemize}
    %\begin{itemize}
    %\item Modules \\
    %	We separated steps that modify the data into separate modules to formally define how they work. With observational data sets, the data goes through only a modelling step and an evaluation step. With synthetic data, we also need to define a data generating  step. We call the blocks doing these steps {\it modules}. To fully define a module, one must define its input and output. Modules have different functions, inputs and outputs. Modules are interchangeable with a similar type of module if they share the same input and output (You can change decider module of type A with decider module of type B). With this modular framework we achieve a unified way of presenting the key differences in different settings.
    %	\begin{itemize}
    %	\item Decider modules
    		%\begin{itemize}
    %		\item In general, the decider module assigns predictions to the observations based on some information.
    %		\item %The information available to a decision-maker in the decider module includes observable and -- possibly -- unobservable features, denoted with X and Z respectively.
    %		\item %The predictions given by a decider module can be relative or absolute. With relative predictions we refer to that a decider module can give out a ranking of the subjects based on their predicted tendency towards an outcome. Absolute predictions can be either binary or continuous in nature. For example, they can correspond to yes or no decisions or to a probability value.
    %		\item %Inner workings (procedure/algorithm) of the module may or may not be known. In observational data sets, the mechanism or the decider which has labeled the data is usually unknown. E.g. we do not -- eactly -- know how judges obtain a decision. Conversely, in synthetic data sets the procedure creating the decisions is fully known because we define the process.
    %		\item The decider (module) in the data step has unobservable information available for making the decisions. 
    %		\item %The behaviour of the decider module in the data generating step can be defined in many ways. We have used both the method presented by Lakkaraju et al. and two methods of our own. We created these two deciders to remove the interdependencies of the decisions made by the decider Lakkaraju et al. presented.
    %		\item 		\end{itemize}
    %	\item Evaluator modules
    		%\begin{itemize}
    %		\item Evaluator module gets the decisions, observable features of the subject and predictions made by the deciders and outputs an estimate of...
    %		\item The evaluator module outputs a reliable estimate of a decider module's performance. The estimate is created by the evaluator module and it should 
    		%	\begin{itemize}
    %		%	\item be precise and unbiased
    %			\item have a low variance
    %			\item be as robust as possible to slight changes in the data generation. 
    		%	\end{itemize}
    %		\item The estimate of the evaluator should also be accurate for all levels of leniency.
    		%\end{itemize}
    %	\end{itemize}
    
    %\item Example: in observational data sets, the deciders have already made decision concerning the subjects and we have a selectively labeled data set available. In the modular framework we refer to the actions of the human labelers as a decider module which has access to latent information. 
    
    %\item Problem formulation \\
    
    
    
    
    %The "eventual goal" is to create such a decider module that it can outperform (have a lower failure on all levels of acceptance rate) the deciders in the data generating process. The problem is of course comparing the performance of the deciders. We try to address that.
    
    %(It's important to somehow keep these two different goals separate.)
    
    %We show that our method is robust against violations and modifications in the data generating mechanisms.
    
    
    
    %\end{itemize}
    
    \section{Counterfactual-Based Imputation For Selective Labels}
    
    \acomment{This chapter should be our contributions. One discuss previous results we build over but one should consider putting them in the previous section.}
    
    
    \acomment{We need to start by noting that with a simple example how we assume this to work. If X indicates a safe subject that is jailed, then we know that (I dont know how this applies to other produces) that Z must have indicated a serious risk. This makes $Y=0$ more likely than what regression on $X$ suggests.}
    
    
    \acomment{I do not understand what we are doing from this section. It needs to be described ASAP.}
    
    
    \begin{itemize}
    
    \item Theory \\ (Present here (1) what counterfactuals are, (2) motivation for structural equations, (3) an example or other more easily approachable explanation of applying them, (4) why we used computational methods)
    	\begin{itemize}
    	\item Counterfactuals are 
    		\begin{itemize}
    		\item hypothesized quantities that encode the would-have-been relation of the outcome and the treatment assignment.
    		\item Using counterfactuals, we can discuss hypothetical events that didn't happen. 
    
    		\item Using counterfactuals requires defining a structural causal model.
    		\item Pearl's Book of Why: "The fundamental problem"
    
    		\end{itemize}
    	\item By defining structural equations / a graph
    		\begin{itemize}
    		\item we can begin formulating causal questions to get answers to our questions.
    
    		\item Once we have defined the equations, counterfactuals are obtained by... (abduction, action, prediction, don't we apply the do operator on the \decision, so that we obtain $\outcome_{\decision=1}(x)$?)
    
    		\item We denote the counterfactual "Y had been y had T been t" with...
    		\item By first estimating the distribution of the latent variable Z we can impose 
    		\item Now counterfactuals can be defined as
    			\begin{definition}[Unit-level counterfactuals \cite{pearl2010introduction}]
    			Let $M$ be a structural model and $M_x$ a modified version of $M$, with the equation(s) of $X$ replaced by $X = x$. Denote the solution for $Y$ in the equations of $M_x$ by the symbol $Y_{M_x}(u)$. The counterfactual $Y_x(u)$ (Read: "The value of Y in unit u, had X been x") is given by:
    			\begin{equation} \label{eq:counterfactual}
    				Y_x(u) := Y_{M_x}(u)
    			\end{equation}
    			\end{definition}
    		\end{itemize}
    	\item In a high level
    		\begin{itemize}
    		\item there is usually some data recoverable from the unobservables. For example, if the observable attributes are contrary to the outcome/decision we can claim that the latent variable included some significant information.
    
		\item We retrieve this information using the prespecified structural equations. After estimating the desired parameters, we can estimate the value of the counterfactual (unobserved) outcome by switching the value of \decision and propagating the computations through the rest of the graph (a worked sketch of these steps is given right after this list).
    
    		\end{itemize}
    
	\item Because the causal effect of \decision on \outcome is not identifiable, we used a Bayesian approach
    
    	\item Recent advances in the computational methods provide us with ways of inferring the value of the latent variable by applying Bayesian techniques to... Previously this kind of analysis required us to define X and compute Y...
    
    \end{itemize}
    
    \item Model (Structure, equations in a general and more specified level, assumptions, how we construct the counterfactual...) 
    	\begin{itemize}
    
    	\item Structure is as is in the diagram. Square around Z represents that it's unobservable/latent.
    
    	The features of the subjects include observable and -- possibly -- unobservable features, denoted with X and Z respectively. The only feature of a decider is their leniency R (depicting some baseline probability of a positive decision). The decisions given will be denoted with T and the resulting outcomes with Y, where 0 stands for negative outcome or decision and 1 for positive.
    	\item The causal diagram presents how decision T is affected by the decider's leniency (R), the subject's observable private features (X) and the latent information regarding the subject's tendency for a negative outcome (Z). Correspondingly the outcome (Y) is affected only by the decision T and the above-mentioned features X and Z. 
    	\item The causal directions and implied independencies are readable from the diagram. We assume X and Z to be independent.
    	\item The structural equations connecting the variables can be formalized in a general level as (see Jung)
		\begin{align} \label{eq:structural_equations}
		\outcome_0 & = 1 \nonumber \\
		\outcome_1 & \sim f(\featuresValue, \unobservableValue; \beta_{\featuresValue\outcomeValue}, \beta_{\unobservableValue\outcomeValue}) \nonumber \\
		\decision      & \sim g(\featuresValue, \unobservableValue; \beta_{\featuresValue\decisionValue}, \beta_{\unobservableValue\decisionValue}, \alpha_j) \nonumber \\
		\outcome & = \outcome_\decisionValue
		\end{align}
    	where the beta and alpha coefficients are the path coefficients specified in the causal diagram
	\item This general formulation of the selective labels problem enables the use of this approach even when the outcome is not binary. Notably, this approach -- compared to that of Jung et al. -- makes the selective labels issue explicit in the structural equations by deterministically setting the value of the outcome $y$ to one in the event of a negative decision. In addition, we allow the judges to differ in their baseline probabilities for positive decisions, which is by definition leniency.
    	\item Now by imposing a value for the decision \decision we can obtain the counterfactual by simply assigning the desired value to the equations in \ref{eq:structural_equations}. This assumes that... (Consistency constraint) Now we want to know {\it what would have been the outcome \outcome for this individual \featuresValue had the decision been $\decision = 1$, or more specifically $\outcome_{\decision = 1}(\featuresValue)$}.
	\item To compute the values of the counterfactuals, we need to obtain estimates for the coefficients and latent variables. We specified a Bayesian (structural) model, which requires establishing a set of probabilistic expressions connecting the observed quantities to the parameters of interest. The relationships of the variables and coefficients are presented at a general level in equation \ref{eq:structural_equations} and figure X. We modelled the observed data as
		\begin{align} \label{eq:data_model}
		 y(1) & \sim \text{Bernoulli}(\invlogit(\beta_{xy} x + \beta_{zy} z)) \nonumber \\
		 t & \sim \text{Bernoulli}(\invlogit(\alpha_{j} + \beta_{xt} x + \beta_{zt}z)).
		\end{align}
    	\item Bayesian models also require the specification of prior distributions for the variables of interest to obtain an estimate of their distribution after observations, the posterior distribution.
	\item Identifiability of models with unobserved confounding has been discussed by, e.g., McCandless et al. and Gelman. Following Gelman, we note that scale-invariance has been addressed by specifying the priors. (?)
    	\item Specify, motivate and explain priors here if space.
    	\end{itemize}
    \item Computation (Stan in general, ...)
    	\begin{itemize}
    
	\item Using the model specified in equation \ref{eq:data_model}, we used Stan to estimate the intercepts, path coefficients and latent variables. Stan provides tools for efficient computational estimation of posterior distributions. Stan uses the No-U-Turn Sampler (NUTS), an extension of the Hamiltonian Monte Carlo (HMC) algorithm, to estimate the posterior distribution for inference. (At a high level, the sampler uses the gradient of the log posterior as a potential-energy surface and simulates the motion of a particle with auxiliary momentum on that surface in order to draw samples from the posterior.) Stan also provides black-box variational inference and direct optimization of the posterior, but these were deemed insufficient for estimating the posterior in this setting.
    
    	\item Chain lengths were set to X and number of chains deployed was Y. (Explain algorithm fully later)
    	\end{itemize}
    \end{itemize}
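As a brief worked sketch of the abduction--action--prediction steps in this model (an illustration using the notation above, not a full derivation): consider a subject with observed features \featuresValue who received the negative decision $\decision = 0$ in the data, so that the recorded label is uninformative. Abduction updates the joint posterior over the latent \unobservableValue and the model parameters given all observed data; in particular, a negative decision despite benign observed features makes large values of \unobservableValue more probable. Action replaces the decision equation by setting $\decision = 1$, and prediction averages the outcome model over this posterior:
\begin{equation}
P(\outcome_{\decision = 1} = 0 \mid \featuresValue, \decision = 0) = E\left[\, P(\outcome_1 = 0 \mid \featuresValue, \unobservableValue, \beta) \mid \text{data} \,\right],
\end{equation}
which is the quantity imputed for the missing labels in the procedure below.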
    
    
    \begin{algorithm}
    	%\item Potential outcomes / CBI \acomment{Put this in section 3? Algorithm box with these?}
    		\begin{itemize}
    		\item Take test set
    		\item Compute the posterior for parameters and variables presented in equation \ref{eq:data_model}.
    		\item Using the posterior predictive distribution, draw estimates for the counterfactuals.
    		\item Impute the missing outcomes using the estimates from previous step
    		\item Obtain a point estimate for the failure rate by computing the mean.
    		\item Estimates for the counterfactuals Y(1) for the unobserved values of Y were obtained using the posterior expectations from Stan. We used the NUTS sampler to estimate the posterior. When the values for...
    		\end{itemize}
    	
    \caption{Counterfactual based imputation}	\end{algorithm}
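To make the procedure concrete, below is a minimal sketch of the imputation step (illustrative Python; the names are ours, and we assume posterior draws of the coefficients and of each subject's latent $z$ are available, e.g.\ from the sampler described above). It follows the outcome model of equation \ref{eq:data_model}.
\begin{verbatim}
import numpy as np

def logistic(a):
    return 1.0 / (1.0 + np.exp(-a))

def estimate_failure_rate(x, t_data, y_data, t_eval,
                          beta_xy, beta_zy, z_draws):
    """Failure rate of the evaluated decisions t_eval, imputing
    outcomes that are unobserved due to selective labeling.
    beta_xy, beta_zy: posterior draws, shape (S,)
    z_draws:          posterior draws of the latents, shape (S, n)"""
    lin = beta_xy[:, None] * x[None, :] + beta_zy[:, None] * z_draws
    p_y1 = logistic(lin).mean(axis=0)   # E[P(Y=1 | T=1, x, z) | data]
    # keep observed labels, impute the rest with the counterfactual estimate
    y_hat = np.where(t_data == 1, y_data, p_y1)
    # failures (Y=0) can only occur for positive decisions of the
    # evaluated decision maker; Y=1 is forced whenever it decides T=0
    return np.mean(np.where(t_eval == 1, 1.0 - y_hat, 0.0))
\end{verbatim}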
    
    
    \section{Extension To Non-Linearity (2nd priority)}
    
    
    % If X has multiple dimensions or the relationships between the features and the outcomes are clearly non-linear the presented approach can be extended to accomodate non-lineairty. Jung proposed that... Groups... etc etc.
    
    
    \section{Related work}
    
    \begin{itemize}
    \item Lakkaraju and contraction. \cite{lakkaraju2017selective}
    
    	\item Contraction
    		\begin{itemize}
    		\item Algorithm by Lakkaraju et al. Assumes that the subjects are assigned to the judges at random and requires that the judges differ in leniency. 
		\item Can estimate the true failure rate only up to the leniency of the most lenient decision-maker.
    		\item Performance is affected by the number of people judged by the most lenient decision-maker, the agreement rate and the leniency of the most lenient decision-maker. (Performance is guaranteed / better when ...)
    		\item Works only on binary outcomes
    		\item (We show that our method isn't constrained by any of these)
    		\item The algorithm goes as follows...
    %\begin{algorithm}[] 			% enter the algorithm environment
    %\caption{Contraction algorithm \cite{lakkaraju17}} 		% give the algorithm a caption
    %\label{alg:contraction} 			% and a label for \ref{} commands later in the document
    %\begin{algorithmic}[1] 		% enter the algorithmic environment
    %\REQUIRE Labeled test data $\D$ with probabilities $\s$ and \emph{missing outcome labels} for observations with $T=0$, acceptance rate r
    %\ENSURE
    %\STATE Let $q$ be the decision-maker with highest acceptance rate in $\D$.
    %\STATE $\D_q = \{(x, j, t, y) \in \D|j=q\}$
    %\STATE \hskip3.0em $\rhd$ $\D_q$ is the set of all observations judged by $q$
    %\STATE
    %\STATE $\RR_q = \{(x, j, t, y) \in \D_q|t=1\}$
    %\STATE \hskip3.0em $\rhd$ $\RR_q$ is the set of observations in $\D_q$ with observed outcome labels
    %\STATE
    %\STATE Sort observations in $\RR_q$ in descending order of confidence scores $\s$ and assign to $\RR_q^{sort}$.
    %\STATE \hskip3.0em $\rhd$ Observations deemed as high risk by the black-box model $\mathcal{B}$ are at the top of this list
    %\STATE
    %\STATE Remove the top $[(1.0-r)|\D_q |]-[|\D_q |-|\RR_q |]$ observations of $\RR_q^{sort}$ and call this list $\mathcal{R_B}$
    %\STATE \hskip3.0em $\rhd$ $\mathcal{R_B}$ is the list of observations assigned to $t = 1$ by $\mathcal{B}$
    %\STATE
    %\STATE Compute $\mathbf{u}=\sum_{i=1}^{|\mathcal{R_B}|} \dfrac{\delta\{y_i=0\}}{| \D_q |}$.
    %\RETURN $\mathbf{u}$
    %\end{algorithmic}
    %\end{algorithm}
    		\end{itemize}
    
    \item Counterfactuals/Potential outcomes. \cite{pearl2010introduction} (also Rubin)
    \item Approach of Jung et al for optimal policy construction. \cite{jung2018algorithmic}
    \item Discussions of latent confounders in multiple contexts.
    \item Imputation methods and other approaches to selective labels, eg. \cite{dearteaga2018learning}
    \end{itemize}
    
    \section{Experiments}
    
In this section we present our results from experiments with synthetic and realistic data. We show that our approach provides the best estimates for evaluating the performance of a predictive model at all levels of leniency.
    
    \subsection{Synthetic data}
    
    \rcomment{ I presume MM's preferences were that the outcome would be from Bernoulli distribution and that the decisions would be independent. So, let's first explain those ways thoroughly and then mention what we changed as discussed.}
    
    
    
We experimented with synthetic data sets to examine accuracy, unbiasedness and robustness to violations of the assumptions.
    
    
We sampled $N=50\,000$ samples of $X$, $Z$, and $W$ as independent standard Gaussians. We then drew the outcome $Y$ from a Bernoulli distribution with parameter $p = 1 - \invlogit(\beta_xx+\beta_zz+\beta_ww)$, so that $P(Y=0|X, Z, W) = \invlogit(\beta_xx+\beta_zz+\beta_ww)$, where the coefficients for $X$, $Z$ and $W$ were set to $1$, $1$ and $0.2$ respectively. We sampled $50$ leniency levels $R$ uniformly from $[0,1]$ and assigned subjects to them at random, so that each leniency level was assigned $1000$ subjects. In the example, this mimics having 50 judges, each deciding on $1000$ defendants. The data was divided in half to form a training set and a test set. This process follows the suggestion of Lakkaraju et al. \cite{lakkaraju2017selective}. \acomment{Check before?}
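For concreteness, a minimal sketch of this data-generating process (illustrative Python, not the actual experiment code; names and the random seed are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 50_000
x, z, w = rng.standard_normal((3, N))   # independent standard Gaussians
beta_x, beta_z, beta_w = 1.0, 1.0, 0.2

def logistic(a):
    return 1.0 / (1.0 + np.exp(-a))

p_fail = logistic(beta_x * x + beta_z * z + beta_w * w)  # P(Y = 0 | X, Z, W)
y = rng.binomial(1, 1.0 - p_fail)                        # Y = 1 is a good outcome

# 50 judges with leniencies drawn uniformly, 1000 subjects each
leniency = np.repeat(rng.uniform(0, 1, size=50), 1000)
\end{verbatim}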
    
    %This is one data generation module.
    % It can be / was modified by changing the outcome producing mechanism. For other experiments we changed the outcome generating mechanism so that the outcome was assigned value 1 if
    
    
The \emph{default} decision maker in the data fits a logistic regression model $Y \sim \invlogit(\beta_x x+\beta_z z)$ using the training set. Decisions were assigned by computing the risk quantile each subject belongs to, obtained as the inverse cdf of ... , and giving $T=1$ to the fraction $R$ of test-set subjects, determined by the leniency, that have the highest predicted probability of $Y=1$. For all subjects for which $T=0$ we set $Y=1$.
    
 We used a number of different decision mechanisms. A \emph{limited} decision maker works as the default but uses the regression model $Y \sim \invlogit(\beta_x x)$; hence it is unable to observe $Z$.
A \emph{biased} decision maker works similarly to the limited one, but its logistic regression model is .. , which biases the decisions.
Given leniency $R$, a \emph{random} decision maker decides $T=1$ with probability given by $R$.
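For illustration, a sketch of the scores these decision makers threshold (illustrative Python with fixed coefficients in place of the fitted ones; the \emph{biased} variant is omitted since its exact form is left open above):
\begin{verbatim}
import numpy as np

def risk_default(x, z, beta_x=1.0, beta_z=1.0):
    """Default decider's risk score P(Y=0 | X, Z); T=1 is then given
    to the leniency-r fraction with the lowest risk."""
    return 1.0 / (1.0 + np.exp(-(beta_x * x + beta_z * z)))

def risk_limited(x, beta_x=1.0):
    """Limited decider: as the default, but it cannot observe Z."""
    return 1.0 / (1.0 + np.exp(-beta_x * x))

def decide_random(n, r, seed=0):
    """Random decider: T=1 with probability r, ignoring all features."""
    rng = np.random.default_rng(seed)
    return rng.binomial(1, r, size=n)
\end{verbatim}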
    
    
In contrast, Lakkaraju et al. essentially order the subjects and decide $T=1$ for exactly the fraction given by the leniency $R$. We see this as unrealistic: the decision on a subject should not depend on the decisions on other subjects. In the example this would induce unethical behaviour: a single judge would need to jail a defendant today in order to release a defendant tomorrow.
We instead treat the observations as independent; the leniency is still a good estimate of the acceptance rate, since the acceptance rate converges to the leniency. \acomment{As a reviewer I would perhaps ask to see the results for the Lakkaraju mechanism.}
    
     
     %This is a decider module. We experimented with different combinations of decider and data generating modules to show X / see Y. (to see that our method is robust against non-informative, biased and bad decisions . Due to space constraints we defer these results...)
    
    \paragraph{Evaluators} 
    
	We deployed multiple evaluator modules to estimate the true failure rate of the decider module. These estimates should be close to those of the true evaluation evaluator, and they will eventually be compared to the human evaluation curve (a sketch of these estimators is given after the list below).
    	\begin{itemize}
    	\item True evaluation
    		\begin{itemize}
    
    		\item Depicts the true performance of the model. "How well would this model perform had it been deployed?" 
    		\item Not available when using observational data. 
    		\item Calculated by ordering the observations based on the predictions from the black-box model B and counting the failure rate from the ground truth labels.
    
    		\end{itemize}
    	\item Human evaluation
    		\begin{itemize}
    		\item The performance of the deciders in the data generation step. We binned deciders with similar values of leniency and counted their failure rate.
    		\item In observational data sets, we can only record the decisions and acceptance rates of these decision-makers. 
    		\item This curve is eventually the benchmark for the performance of a model.
    		\end{itemize}
    	\item Labeled outcomes
    		\begin{itemize}
    		\item Vanilla estimator of a model's performance. Obtained by first ordering the observations by the predictions assigned by the decider in the modelling step.
		\item Then the fraction $1-r$ of the subjects deemed most dangerous are detained and given a negative decision. The failure rate is computed as the ratio of observed negative outcomes to the number of subjects.
    		\end{itemize}
    
    	\end{itemize}
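For illustration, the difference between the true evaluation and the labeled outcomes baseline can be sketched as follows (illustrative Python; \texttt{pred\_risk} denotes the model's predicted probability of a negative outcome, and the other names are ours):
\begin{verbatim}
import numpy as np

def failure_rate_at(pred_risk, y, r):
    """Release (T=1) the fraction r with the lowest predicted risk
    and count the share of all subjects that then fail (Y=0)."""
    n = len(pred_risk)
    released = np.argsort(pred_risk)[: int(np.floor(r * n))]
    return np.sum(y[released] == 0) / n

# true evaluation:   failure_rate_at(pred_risk, y_ground_truth, r)
# labeled outcomes:  failure_rate_at(pred_risk, y_observed, r)
#   where y_observed equals 1 whenever the decision in the data was T=0,
#   which is why this naive estimate tends to be overly optimistic
\end{verbatim}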
    
    \paragraph{Results} 
    
    (Target for this section from problem formulation: show that our evaluator is unbiased/accurate (show mean absolute error), robust to changes in data generation (some table perhaps, at least should discuss situations when the decisions are bad/biased/random = non-informative or misleading), also if the decider in the modelling step is bad and its information is used as input, what happens.)
    	\begin{itemize}
	\item Accuracy: we have defined two metrics, acceptance rate and failure rate. In this section we show that our method can accurately recover the true failure rate at all acceptance rates with low mean absolute error. As figure X shows, our method can recover the true performance of the predictive model with good accuracy. The mean absolute errors w.r.t.\ the true evaluation were 0.XXX and 0.XXX for the contraction and CBI approaches respectively.
	\item In figure X we also show that our method tracks the true evaluation curve with low variance.
    	\end{itemize}
    
    %\end{itemize}
    
    
    \subsection{Realistic data}
    In this section we present results from experiments with (realistic) data sets. 
    
    \begin{itemize}
    \item COMPAS data set
    	\begin{itemize}
    	\item Size, availability, COMPAS scoring
    		\begin{itemize}
		\item COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is Northpointe's (now diff. name) tool for guiding decisions in the criminal justice system.
		\item The COMPAS general recidivism risk score is designed to predict recidivism within the following two years.

		\item The final data set comprises 6172 subjects assessed at Broward County, Florida. The data was preprocessed to include only subjects assessed at the pretrial stage and (something about traffic charges).
		\item The data was made available by ProPublica.
		\item Their analysis and results are presented in the original article ``Machine Bias'', in which they argue that the COMPAS metric assigns biased risk evaluations based on race.
    
    		\item Data includes the subjects' demographic information (incl. gender, age, race) and information on their previous offences. 
    		\end{itemize}
    	\item Subsequent modifications for analysis 
    		\begin{itemize}
    		\item We created 9 synthetic judges with leniencies 0.1, 0.2, ..., 0.9. 
		\item Subjects were distributed to all the judges evenly and at random to enable comparison with the contraction method.
		\item We employed a similar decider module as explained in Lakkaraju's paper; its input was the COMPAS score.
		\item As the COMPAS score is derived mainly from ``prior criminal history, criminal associates, drug involvement, and early indicators of juvenile delinquency problems'', it can be said to have external information available that is not coded into the four above-mentioned variables. (quoted text copy-pasted from here)
		\item The data was split into training and test sets.
		\item A logistic regression model was built to predict two-year recidivism from categorized age, gender, the number of priors, and the degree of crime COMPAS screened for (felony/misdemeanor).
    
    		\item We used these same variables as input to the CBI evaluator.
    
    		\end{itemize}
    	\item Results
    		\begin{itemize}
    
    		\item Results from this analysis are presented in figure X. In the figure we see that CBI follows the true evaluation curve very closely.
    
		\item We can also deduce from the figure that if this predictive model were to be deployed, it would not necessarily improve on the decisions made by these synthetic judges.
    		\end{itemize}
    	\end{itemize}
\item Catalonian data (this could just be for our method? Hide about 25\% of outcome labels and show that we can estimate the failure rate for ALL levels of leniency even though the leniency of this one judge is only 0.25) (2nd priority)
    	\begin{itemize}
    	\item Size, availability, RisCanvi scoring
    	\item Subsequent modifications for analysis
    	\item Results
    	\end{itemize}
    \end{itemize}
    
    \section{Discussion}
    
    \begin{itemize}
    \item Conclusions 
    \item Future work / Impact
    \end{itemize}
    
    
    % \textbf{Acknowledgments.}
    %The computational resources must be mentioned. 
    
    
    %\clearpage
    % \balance
    \bibliographystyle{ACM-Reference-Format}
    \bibliography{biblio}
    %\balancecolumns % GM June 2007
    
    \end{document}