Commit 0aa4118a authored by Riku-Laine

Fig updates, hierarchical model to section 7.6

parent 41284469
\newcommand{\M}{\mathcal{M}} % "fancy M"
\newcommand{\B}{\mathcal{B}} % "fancy B"
\newcommand{\RR}{\mathcal{R}} % contraction algorithm's R
\newcommand{\invlogit}{\text{logit}^{-1}} % inverse logit function
\renewcommand{\descriptionlabel}[1]{\hspace{\labelsep}\textnormal{#1}}
\end{figure}
\section{Modular framework -- 19 June discussion} \label{sec:modular_framework}
\emph{Below is the framework as it was written on the whiteboard; RL then presents his own remarks on how he understood it.}
Both of the data generating algorithms are presented in this chapter.
In the setting without unobservables Z, we first sample an acceptance rate $r$ for each of the $M=100$ judges uniformly from the half-open interval $[0.1; 0.9)$. Then we randomly assign 500 unique subjects to each judge (50\,000 in total) and simulate their features X as i.i.d.\ standard Gaussian random variables with zero mean and unit variance. The probability of a negative outcome is then calculated as
\begin{equation} \label{eq:inv_logit}
P(Y=0|X=x) = \dfrac{1}{1+\exp(-x)}=\invlogit(x).
\end{equation}
Because $P(Y=1|X=x) = 1-P(Y=0|X=x) = 1-\invlogit(x)$, the outcome variable Y can be sampled from a Bernoulli distribution with parameter $1-\invlogit(x)$. The data is then sorted for each judge by the probabilities $P(Y=0|X=x)$ in descending order. If a subject is in the top $(1-r) \cdot 100 \%$ of the observations assigned to a judge, the decision variable T is set to zero, and otherwise to one.
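For concreteness, a minimal Python sketch of this generating process (the data-frame layout, seed and variable names are illustrative, not part of the text):
\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
M, N_PER_JUDGE = 100, 500

# Acceptance rate r per judge: U[0.1, 0.9), rounded to the nearest tenth.
r = np.round(rng.uniform(0.1, 0.9, M), 1)
df = pd.DataFrame({
    "judge": np.repeat(np.arange(M), N_PER_JUDGE),
    "x": rng.normal(size=M * N_PER_JUDGE),
})
df["r"] = r[df["judge"].to_numpy()]

# P(Y=0|X=x) = logit^{-1}(x); Y ~ Bernoulli(1 - logit^{-1}(x)).
p_y0 = 1.0 / (1.0 + np.exp(-df["x"].to_numpy()))
df["y"] = rng.binomial(1, 1.0 - p_y0)

# Sort each judge's caseload by P(Y=0|X=x) descending and deny the top
# (1-r)*100%; sorting by x is equivalent because logit^{-1} is monotone.
df = df.sort_values(["judge", "x"], ascending=[True, False])
rank = df.groupby("judge").cumcount()  # 0 = riskiest subject of a judge
df["t"] = (rank >= (1 - df["r"]) * N_PER_JUDGE).astype(int)
\end{verbatim}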
\begin{algorithm}[] % enter the algorithm environment
\caption{Create data without unobservables} % give the algorithm a caption
\ENSURE
\STATE Sample an acceptance rate for each of the M judges from $U(0.1; 0.9)$ and round to the nearest tenth.
\STATE Sample features X for each of the $N_{total}$ observations from a standard Gaussian.
\STATE Calculate $P(Y=0|X=x)=\invlogit(x)$ for each observation.
\STATE Sample Y from a Bernoulli distribution with parameter $1-\invlogit(x)$.
\STATE Sort the data by (1) the judges and (2) by probabilities $P(Y=0|X=x)$ in descending order.
\STATE \hskip3.0em $\rhd$ Now the most dangerous subjects for each of the judges are at the top.
\STATE If a subject belongs to the top $(1-r) \cdot 100 \%$ of observations assigned to a judge, set $T=0$, else set $T=1$.
In the setting with unobservables Z, we first sample an acceptance rate $r$ for each of the $M=100$ judges uniformly from the half-open interval $[0.1; 0.9)$. Then we randomly assign 500 unique subjects to each judge (50\,000 in total) and simulate their features X, Z and W as i.i.d.\ standard Gaussian random variables with zero mean and unit variance. The probability of a negative outcome is then calculated as
\begin{equation}
P(Y=0|X=x, Z=z, W=w)=\invlogit(\beta_Xx+\beta_Zz+\beta_Ww)~,
\end{equation}
where $\beta_X=\beta_Z =1$ and $\beta_W=0.2$. Next, the value of the outcome Y is set to 0 if $P(Y = 0| X, Z, W) \geq 0.5$ and to 1 otherwise. The conditional probability of a negative decision (T=0) is defined as
\begin{equation}
P(T=0|X=x, Z=z)=\invlogit(\beta_Xx+\beta_Zz)+\epsilon~,
\end{equation}
where $\epsilon \sim N(0, 0.1)$. Next, the data is sorted for each judge by the probabilities $P(T=0|X, Z)$ in descending order. If the subject is in the top $(1-r) \cdot 100 \%$ of observations assigned to a judge, the decision variable T is set to zero and otherwise to one.
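A corresponding Python sketch for this setting; only $\beta_X=\beta_Z=1$, $\beta_W=0.2$ and $\epsilon \sim N(0, 0.1)$ come from the text (here $0.1$ is read as the standard deviation), the rest is illustrative:
\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
M, N_PER_JUDGE = 100, 500
B_X, B_Z, B_W = 1.0, 1.0, 0.2
n = M * N_PER_JUDGE

def invlogit(v):
    return 1.0 / (1.0 + np.exp(-v))

df = pd.DataFrame({
    "judge": np.repeat(np.arange(M), N_PER_JUDGE),
    "x": rng.normal(size=n),
    "z": rng.normal(size=n),
    "w": rng.normal(size=n),
})
df["r"] = np.round(rng.uniform(0.1, 0.9, M), 1)[df["judge"].to_numpy()]

# Deterministic outcome: Y = 0 iff P(Y=0|x,z,w) >= 0.5.
df["y"] = (invlogit(B_X * df.x + B_Z * df.z + B_W * df.w) < 0.5).astype(int)

# Noisy score P(T=0|x,z) used only for ranking each judge's caseload.
df["p_t0"] = invlogit(B_X * df.x + B_Z * df.z) + rng.normal(0, 0.1, n)
df = df.sort_values(["judge", "p_t0"], ascending=[True, False])
rank = df.groupby("judge").cumcount()
df["t"] = (rank >= (1 - df["r"]) * N_PER_JUDGE).astype(int)
\end{verbatim}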
\subsection{Modular framework -- Monte Carlo evaluator} \label{sec:modules_mc}
For these results, data was generated either with the module in algorithm \ref{alg:dg:coinflip_with_z} (drawing Y from a Bernoulli distribution with parameter $\pr(Y=0|X, Z, W)$, as previously) or with the module in algorithm \ref{alg:dg:threshold_with_Z} (assigning Y based on the value of $\invlogit(\beta_XX+\beta_ZZ)$). Decisions were determined using one of two modules: the module in algorithm \ref{alg:decider:quantile} (decisions based on quantiles) or the one in algorithm \ref{alg:decider:lakkaraju} (``human'' decision-maker as in \cite{lakkaraju17}). Curves were computed with the True evaluation (algorithm \ref{alg:eval:true_eval}), Labeled outcomes (\ref{alg:eval:labeled_outcomes}), Human evaluation (\ref{alg:eval:human_eval}), Contraction (\ref{alg:eval:contraction}) and Monte Carlo (\ref{alg:eval:mc}) evaluators. Results are presented in figure \ref{fig:modules_mc}; the corresponding MAEs are presented in table \ref{tab:modules_mc}.
From the table we can see that the MAE is lowest when the data generating process corresponds closely to the assumptions of the Monte Carlo algorithm.
We have three different kinds of data generating modules (DG modules).
\label{tab:dg_modules}
\begin{tabular}{@{}llcc@{}}
\toprule
& & \multicolumn{2}{c}{Feature generation} \\ \cmidrule(l){3-4}
& & \multicolumn{1}{l}{With unobservables} & \multicolumn{1}{l}{Without unobservables} \\
\multicolumn{1}{c}{\multirow{2}{*}{Outcome}} & Drawn from Bernoulli & \ref{alg:dg:coinflip_with_z} & \ref{alg:dg:coinflip_without_z} \\
\multicolumn{1}{c}{} & Assigned by threshold & \ref{alg:dg:threshold_with_Z} & \\ \bottomrule
\end{tabular}
\end{table}
\ENSURE
\FORALL{observations}
\STATE Draw $x$ from a standard Gaussian.
\STATE Draw $y$ from Bernoulli$(1-\invlogit(x))$.
\ENDFOR
\RETURN data
\end{algorithmic}
\ENSURE
\FORALL{observations}
\STATE Draw $x, z$ and $w$ independently from standard Gaussians.
\IF{$\invlogit(\beta_Xx+\beta_Zz+\beta_Ww) \geq 0.5$}
\STATE {Set $y$ to 0.}
\ELSE
\STATE {Set $y$ to 1.}
\ENSURE
\FORALL{observations}
\STATE Draw $x, z$ and $w$ independently from standard Gaussians.
\STATE Draw $y$ from Bernoulli$(1-\invlogit(\beta_Xx+\beta_Zz+\beta_Ww))$.
\ENDFOR
\RETURN data
\end{algorithmic}
Below is the decision-maker by Lakkaraju \cite{lakkaraju17}.
\ENSURE
\STATE Sample an acceptance rate for each of the M judges from Uniform$(0.1; 0.9)$ and round to the nearest tenth.
\STATE Assign each observation to a judge at random.
\STATE Calculate $\pr(T=0|X, Z) = \invlogit(\beta_XX+\beta_ZZ) + \epsilon$ for each observation and attach to data.
\STATE Sort the data by (1) the judges and (2) by the probabilities in descending order.
\STATE If a subject belongs to the top $(1-r) \cdot 100 \%$ of observations assigned to that judge, set $T=0$, else set $T=1$.
\STATE Set $Y=$ NA if decision is negative ($T=0$).
\end{algorithmic}
\end{algorithm}
One discussed way of making the decisions independent was to ``flip a coin at some probability''. An implementation of that idea is presented below in algorithm \ref{alg:decider:coinflip}. As $\pr(T=0|X, Z) = \invlogit(\beta_XX+\beta_ZZ)$, the parameter of the Bernoulli distribution is set to $1-\invlogit(\beta_XX+\beta_ZZ)$. In the practical implementation, since some algorithms need to know the leniency behind a decision, the acceptance rate is then calculated from the decisions.
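A minimal sketch of this decider, assuming a data frame with feature columns x and z as in the generators above (function and column names are illustrative):
\begin{verbatim}
import numpy as np

def bernoulli_decider(df, b_x=1.0, b_z=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # P(T=0|x,z) = logit^{-1}(b_x*x + b_z*z); draw T ~ Bernoulli(1 - P(T=0)).
    p_t0 = 1.0 / (1.0 + np.exp(-(b_x * df["x"] + b_z * df["z"])))
    df["t"] = rng.binomial(1, 1.0 - p_t0)
    df["r"] = df["t"].mean()              # leniency recovered from the decisions
    df.loc[df["t"] == 0, "y"] = np.nan    # outcome hidden after a negative decision
    return df
\end{verbatim}
With multiple judges, the acceptance rate would be computed per judge instead of over the whole data.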
\begin{algorithm}[H] % enter the algorithm environment
\caption{Decider module: decisions from Bernoulli} % give the algorithm a caption
\begin{algorithmic}[1] % enter the algorithmic environment
\REQUIRE Data with features $X, Z$, knowledge that both of them affect the outcome Y and that they are independent / Parameters: $\beta_X=1, \beta_Z=1$.
\ENSURE
\STATE Draw $t$ from Bernoulli$(1-\invlogit(\beta_Xx+\beta_Zz))$ for all observations.
\STATE Compute the acceptance rate.
\STATE Set $Y=$ NA if decision is negative ($T=0$).
\RETURN data with decisions.
\end{algorithmic}
\end{algorithm}
A quantile-based decider module is presented in algorithm \ref{alg:decider:quantile}. The algorithm tries to emulate Lakkaraju's decision-maker while giving independent decisions. The independence is achieved by comparing the values of $\invlogit(\beta_Xx+\beta_Zz)$ to the corresponding value of the inverse cumulative distribution function $F^{-1}_{\invlogit(\beta_XX+\beta_ZZ)}$, or $F^{-1}$ for short. The derivation of $F^{-1}$ is deferred to the next section. %The decisions have a guarantee that the fraction of positive decisions will converge to $r$ based on the law of large numbers.
\textbf{Example} Consider a decision-maker with leniency 0.60 who gets a new subject $\{x, z\}$ with a predicted probability $\invlogit(\beta_Xx+\beta_Zz)\approx 0.7$ of a negative outcome, for some coefficients $\beta$. Since the judge has leniency 0.6, their cut-point is $F^{-1}(0.60)\approx0.59$ (with $\mu=0$, $\sigma^2=2$ as below). That is, the judge will not give a positive decision to anyone with a failure probability greater than 0.59, so our example subject will receive a negative decision.
\begin{algorithm}[H] % enter the algorithm environment
\caption{Decider module: "quantile decisions"} % give the algorithm a caption
\ENSURE
\STATE Sample an acceptance rate for each of the M judges from Uniform$(0.1; 0.9)$ and round to the nearest tenth.
\STATE Assign each observation to a judge at random.
\STATE Calculate $\pr(T=0|X, Z) = \invlogit(\beta_XX+\beta_ZZ)$ for all observations.
\STATE If $\invlogit(\beta_Xx+\beta_Zz) \geq F^{-1}(r)$, set $t=0$; otherwise set $t=1$.
\STATE Set $Y=$ NA if decision is negative ($T=0$).
\RETURN data with decisions.
\end{algorithmic}
\end{algorithm}
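A sketch of the quantile decider in Python, using $F^{-1}$ from equation \ref{eq:cum_inv} below; a leniency column r per observation is assumed, and names are illustrative:
\begin{verbatim}
import numpy as np
from scipy.special import erfinv

def invlogit(v):
    return 1.0 / (1.0 + np.exp(-v))

def F_inv(r, sigma2, mu=0.0):
    # Inverse cdf of the logit-normal distribution (eq. eq:cum_inv).
    return invlogit(erfinv(2 * r - 1) * np.sqrt(2 * sigma2) + mu)

def quantile_decider(df, b_x=1.0, b_z=1.0):
    sigma2 = b_x**2 + b_z**2  # Var(b_x*X + b_z*Z) for independent standard Gaussians
    p_t0 = invlogit(b_x * df["x"] + b_z * df["z"])
    cut = F_inv(df["r"].to_numpy(), sigma2)
    df["t"] = np.where(p_t0 >= cut, 0, 1)  # T=0 iff the score reaches the cut-point
    df.loc[df["t"] == 0, "y"] = np.nan
    return df
\end{verbatim}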
The True evaluation module computes the ``true failure rate'' of a predictive model \emph{had it been deployed to make independent decisions}. Computing this rate requires all outcome labels, which is why it can only be computed on synthetic data.
In practice, the module first trains a model $\B$ and assigns each observation a probability score $\s$, as described above. The observations are then sorted by score in ascending order so that the most risky subjects are last (those with the highest predicted probability of a negative outcome). Taking the first $r \cdot 100\%$ of observations, the true failure rate can be computed directly from the ground truth.
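A sketch of this evaluator, assuming the model's scores and the ground-truth labels are available (names illustrative):
\begin{verbatim}
import numpy as np

def true_evaluation(df, scores, r):
    # Sort ascending by score so that the most risky subjects are last,
    # release the first r*100% and read the failure rate off the ground truth.
    ordered = df.assign(s=scores).sort_values("s")
    released = ordered.head(int(r * len(ordered)))
    return (released["y"] == 0).mean()
\end{verbatim}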
\begin{algorithm}[H] % enter the algorithm environment
\caption{Evaluator module: True evaluation} % give the algorithm a caption
\REQUIRE Data $\D$ with properties $\{j_i, t_i, y_i\}$, acceptance rate r
\ENSURE
\STATE Assign judges with acceptance rates similar to $r$ to $\mathcal{J}$.
\STATE Assign subjects judged and released by judges in $\mathcal{J}$ to $\D_{released}$.
\RETURN $\frac{1}{|\mathcal{J}|}\sum_{i \in \D_{released}}\delta\{y_i=0\}$
\end{algorithmic}
\end{algorithm}
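A sketch of this evaluator under one reading of the pseudocode: judges within a tolerance of the target leniency are pooled, and the failure count is divided by the number of subjects assigned to those judges (the pseudocode's normalizer is stated in terms of $\mathcal{J}$, so this normalization is an assumption):
\begin{verbatim}
import numpy as np

def human_evaluation(df, r, tol=0.05):
    # Pool the judges whose acceptance rate is close to r.
    pooled = df[np.abs(df["r"] - r) <= tol]
    # Failures can only be observed among the released (t = 1).
    failures = ((pooled["t"] == 1) & (pooled["y"] == 0)).sum()
    return failures / len(pooled)
\end{verbatim}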
In equations \ref{eq:Tprob} and \ref{eq:Tdet}, $\pr(Y=0|x, z, DG)$ is the predicted probability of a negative outcome given $x$ and $z$. This probability is predicted by the judge, and here we used the approximation
\begin{equation}
\pr(Y=0|x, z, DG) = \invlogit(\beta_Xx+\beta_Zz)
\end{equation}
which is an increasing function of $z$ for a given $x$. We do not know the $\beta$ coefficients, so here we used the information that they equal one. (In the future, they should be inferred.)
The inverse cumulative function $F^{-1}(r)$ in equations \ref{eq:Tprob} and \ref{eq:Tdet} is the inverse cumulative distribution function of the \emph{logit-normal distribution} with parameters $\mu=0$ and $\sigma^2=2$, i.e.\ $F^{-1}$ is the inverse cdf of the sum of two standard Gaussians after a logistic transformation. If $\beta_X \neq 1$ and/or $\beta_Z \neq 1$, then by the basic properties of variance $\sigma^2=\text{Var}(\beta_XX+\beta_ZZ)=\beta_X^2\text{Var}(X)+\beta_Z^2\text{Var}(Z)$. Finally, the inverse cumulative function is
\begin{equation} \label{eq:cum_inv}
F^{-1}(r) = \invlogit\left(\text{erf}^{-1}(2r-1)\sqrt{2\sigma^2}+\mu\right)
\end{equation}
where the parameters are as discussed and erf is the error function.
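The closed form is easy to sanity-check against an empirical quantile; a quick Python sketch with $\mu=0$, $\sigma^2=2$ and $r=0.6$ (seed and sample size illustrative):
\begin{verbatim}
import numpy as np
from scipy.special import erfinv

rng = np.random.default_rng(0)
mu, sigma2, r = 0.0, 2.0, 0.6

closed_form = 1 / (1 + np.exp(-(erfinv(2 * r - 1) * np.sqrt(2 * sigma2) + mu)))
# One million draws of logit^{-1}(X + Z) with X, Z ~ N(0, 1).
sample = 1 / (1 + np.exp(-(rng.normal(size=10**6) + rng.normal(size=10**6))))
print(closed_form, np.quantile(sample, r))  # both approximately 0.589
\end{verbatim}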
With this knowledge, it can be stated that if we observed $T=0$ with some $x$ and $r$, it must be that $\invlogit(\beta_Xx+\beta_Zz) \geq F^{-1}(r)$. Using basic algebra we obtain
\begin{equation} \label{eq:bounds}
\invlogit(x + z) \geq F^{-1}(r) \Leftrightarrow x+z \geq \text{logit}(F^{-1}(r)) \Leftrightarrow z \geq \text{logit}(F^{-1}(r)) - x
\end{equation}
as the logit and its inverse are strictly increasing functions and hence preserve the ordering of all pairs of values in their domains. From equations \ref{eq:posterior_Z}, \ref{eq:Tprob} and \ref{eq:bounds} we can conclude that $\pr(Z < \text{logit}(F^{-1}(r)) - x~|~T=0, X=x, R=r) = 0$ and that elsewhere the distribution of Z follows a truncated Gaussian with a lower bound of $\text{logit}(F^{-1}(r)) - x$. The expectation of Z can be computed analytically. All of this carries over analogously to the cases with $T=1$, with some inequalities reversed.
In practice, in lines 1--3 and 10--13 of algorithm \ref{alg:eval:mc} we proceed as in the True evaluation algorithm, with the distinction that some of the values of Y are imputed with the corresponding counterfactual probabilities. In line 4 we compute the bounds as motivated above. In the for loop (lines 5--8) we merely compute the expectation of Z given the decision and the knowledge that Z follows a truncated Gaussian. The equation
\begin{equation}
\hat{z} = (1-t) \cdot E(Z~|~Z > Q_r) + t \cdot E(Z~|~Z < Q_r)
\end{equation}
computes the correct expectation automatically. Using the expectation, we then compute $\pr(Y(1) = 0) = \invlogit(x + \hat{z})$ and impute the missing outcomes with it.
\STATE Compute bounds $Q_r = \text{logit}(F^{-1}(r)) - x$ for all judges.
\FORALL{observations in test set}
\STATE Compute expectation $\hat{z} = (1-t) \cdot E(Z | Z > Q_r) + t \cdot E(Z | Z < Q_r)$. %
\STATE Compute $\pr(Y(1) = 0) = \invlogit(x + \hat{z})$.
\ENDFOR
\STATE Impute missing observations using the estimates $\pr(Y(1) = 0)$.
\STATE Sort the data by the probabilities $\s$ to ascending order.
\end{algorithmic}
\end{algorithm}
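The truncated-Gaussian expectations in the loop above can be computed directly with scipy.stats.truncnorm; a minimal sketch, where the bound q and the feature value x are illustrative:
\begin{verbatim}
import numpy as np
from scipy.stats import truncnorm

def expected_z(q, t):
    # Z ~ N(0,1): E(Z | Z > q) after a negative decision (t = 0),
    # E(Z | Z < q) after a positive one (t = 1).
    if t == 0:
        return truncnorm.mean(a=q, b=np.inf)
    return truncnorm.mean(a=-np.inf, b=q)

z_hat = expected_z(q=0.3, t=0)            # illustrative bound Q_r
p_y0 = 1 / (1 + np.exp(-(0.0 + z_hat)))   # invlogit(x + z_hat), here x = 0
\end{verbatim}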
%Comments this approach:
%\begin{itemize}
%\item Propensity ($\pr(T=1| X, Z)$) is taken as given and in correct form. In reality it is not known (?)
%\item The equation for the inverse cdf \ref{eq:cum_inv} assumes the joint pdf of $\invlogit(\beta_XX+\beta_ZZ)$ known when in real data X might be multidimensional and non-normal etc.
%\item
%\end{itemize}
In the future, we should utilize a fully Bayesian approach so that priors for the different $\beta$ coefficients can be included in the model.
The following hierarchical model was used as an initial approach to the problem. Data was generated with unobservables, and both the outcome Y and the decision T were drawn from Bernoulli distributions. The $\beta$ coefficients were systematically overestimated, as shown in figure \ref{fig:posteriors}.
\begin{align} \label{eq1}
1-t~|~x,~z,~\beta_x,~\beta_z & \sim \text{Bernoulli}(\invlogit(\beta_xx + \beta_zz)) \\ \nonumber
Z &\sim N(0, 1) \\ \nonumber
% \alpha_j & \sim N(0, 100), j \in \{1, \ldots, N_{judges} \} \\
\beta_x & \sim N(0, 10^2) \\ \nonumber
\beta_z & \sim N_+(0, 10^2)
\end{align}
\begin{figure}[]
\centering
\begin{subfigure}[b]{0.475\textwidth}
\includegraphics[width=\textwidth]{sl_posterior_betax}
\caption{Posterior of $\beta_x$.}
%\label{fig:random_predictions_without_Z}
\end{subfigure}
\quad %add desired spacing between images, e. g. ~, \quad, \qquad, \hfill etc.
%(or a blank line to force the subfigure onto a new line)
\begin{subfigure}[b]{0.475\textwidth}
\includegraphics[width=\textwidth]{sl_posterior_betaz}
\caption{Posterior of $\beta_z$.}
\end{subfigure}
\caption{Coefficient posteriors from model \ref{eq1}.}
\label{fig:posteriors}
\end{figure}
\newpage
\subsection{Summary table}