A Model of the XOR Function

If batch gradient descent is used to find the weights of a neural network modeling the XOR function, a local minimum may be obtained instead of the exact solution.

The XOR function is defined such that, given two binary values, it returns 1 if exactly one of the values equals 1 and returns 0 otherwise. Let the matrix \boldsymbol{X} carry all possible values of the two binary variables, and let the vector \boldsymbol{y} contain the corresponding outputs for the rows of \boldsymbol{X}. Let \boldsymbol{W} and \boldsymbol{c} be the weights and bias of the first layer, and let \boldsymbol{w} and b be the weights and bias of the second layer. Finally, let the vector \hat{\boldsymbol{y}} contain the network’s predicted outcomes given the binary values in \boldsymbol{X}. I would like to find the parameters \boldsymbol{W}, \boldsymbol{c}, \boldsymbol{w}, and b of the network \hat{\boldsymbol{y}} =f(\boldsymbol{X} ; \boldsymbol{W}, \boldsymbol{c}, \boldsymbol{w}, b) =\max \left\{0, \boldsymbol{X} \boldsymbol{W}+\left[\begin{array}{ccc}\boldsymbol{c} & \ldots & \boldsymbol{c}\end{array}\right]^{\top}\right\} \boldsymbol{w}+\left[\begin{array}{lll}b & \ldots & b\end{array}\right]^{\top} so that it represents the XOR function.
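For concreteness, the training data can be written out explicitly. Using the same row ordering as in the computations below, \boldsymbol{X}=\left[\begin{array}{ll} 0 & 0 \\ 0 & 1 \\ 1 & 0 \\ 1 & 1 \end{array}\right], \quad \boldsymbol{y}=\left[\begin{array}{c} 0 \\ 1 \\ 1 \\ 0 \end{array}\right].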

I now define some terms to simplify the following analysis. Define \boldsymbol{A}^{(1)}=\boldsymbol{X} \boldsymbol{W}+\left[\begin{array}{lll}\boldsymbol{c} & \ldots & \boldsymbol{c}\end{array}\right]^{\top}, \boldsymbol{H}^{(1)}= \max\left\{0, \boldsymbol{A}^{(1)}\right\}, and \boldsymbol{a}^{(2)}=\hat{\boldsymbol{y}}=\boldsymbol{H}^{(1)} \boldsymbol{w}+\left[\begin{array}{lll}b & \ldots & b\end{array}\right]^{\top}. Define q to be the ReLU function \max\{0, \cdot\}, applied element-wise (that is, the same scalar function is applied to each component of a tensor); in general, q need not be this specific function. The derivative of the ReLU function \max \{0, x\} is 0 if x<0 and 1 if x>0. The ReLU function is not differentiable at x=0, but in this model I take its derivative there to be 0. Assume that the loss function is the squared error J(\hat{\boldsymbol{y}})= \|\hat{\boldsymbol{y}}-\boldsymbol{y}\|^{2}=(\hat{\boldsymbol{y}}-\boldsymbol{y})^{\top}(\hat{\boldsymbol{y}}-\boldsymbol{y}).
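To make these definitions concrete, here is a minimal NumPy sketch of the forward pass and the squared-error loss (the function names relu, forward, and loss are my own; the post’s actual code is written in Mathematica):

```python
import numpy as np

def relu(a):
    # Element-wise max{0, a}
    return np.maximum(0.0, a)

def forward(X, W, c, w, b):
    # A^(1) = X W + 1 c^T,  H^(1) = max{0, A^(1)},  y_hat = H^(1) w + b 1
    A1 = X @ W + c          # broadcasting adds c to every row of X W
    H1 = relu(A1)
    return H1 @ w + b

def loss(y_hat, y):
    # J = ||y_hat - y||^2
    return np.sum((y_hat - y) ** 2)
```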

Backpropagation Analysis

The gradient on the output layer is \nabla_{\hat{\boldsymbol{y}}} J=\nabla_{\boldsymbol{a}^{(2)}} J=2(\hat{\boldsymbol{y}}-\boldsymbol{y}). Using the chain rule, I need to compute the second layer’s gradients \frac{d J}{d b}=\left(\nabla_{\boldsymbol{a}^{(2)}} J \right)^{\top} \mathbf{1} and \nabla_{\boldsymbol{w}} J=\left(\frac{\partial \boldsymbol{a}^{(2)}}{\partial \boldsymbol{w}}\right)^{\top} \nabla_{\boldsymbol{a}^{(2)}} J=\left(\boldsymbol{H}^{(1)}\right)^{\top} \nabla_{\boldsymbol{a}^{(2)}} J in order to adjust the parameters b and \boldsymbol{w}. Likewise, I need to compute the first layer’s gradients \nabla_{\boldsymbol{c}} J=\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)^{\top} \mathbf{1} and \frac{\partial J}{\partial \boldsymbol{W}}=\boldsymbol{X}^{\top}\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right) in order to adjust the parameters \boldsymbol{c} and \boldsymbol{W}; these last two expressions are derived in the Appendix.
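Transcribed literally into NumPy (again a sketch rather than the post’s Mathematica code), these four gradients might be computed as follows; relu_prime encodes the convention above that the ReLU derivative is taken to be 0 at 0:

```python
def relu_prime(a):
    # Derivative of max{0, x}: 1 for x > 0, and 0 otherwise (including x = 0)
    return (a > 0).astype(float)

def gradients(X, y, W, c, w, b):
    A1 = X @ W + c                       # A^(1)
    H1 = np.maximum(0.0, A1)             # H^(1)
    y_hat = H1 @ w + b                   # a^(2)
    g = 2.0 * (y_hat - y)                # gradient of J with respect to a^(2)
    grad_b = np.sum(g)                   # (grad_{a^(2)} J)^T 1
    grad_w = H1.T @ g                    # (H^(1))^T grad_{a^(2)} J
    Q = np.outer(g, w) * relu_prime(A1)  # 2(y_hat - y) w^T, Hadamard q'(A^(1))
    grad_c = Q.sum(axis=0)               # Q^T 1
    grad_W = X.T @ Q                     # X^T Q
    return grad_W, grad_c, grad_w, grad_b
```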

Suppose that the initialized weights are as follows and the learning rate is 0.01. \boldsymbol{W}=\left[\begin{array}{ll} 0.1 & 0.2 \\ 0.3 & 0.4 \end{array}\right], \boldsymbol{c}=\left[\begin{array}{c} 0.1 \\ 0.2 \end{array}\right], \boldsymbol{w}=\left[\begin{array}{c} 0.5 \\ -0.6 \end{array}\right], b=-0.3

Computing the outputs of the network on all of the training data with the initialized weights gives \hat{\boldsymbol{y}}=\left[\begin{array}{llll}-0.37 & -0.46 & -0.44 & -0.53\end{array}\right]^{\top} when \boldsymbol{X}=\left[\begin{array}{llll}0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1\end{array}\right]^{\top}, yielding a total loss (error) of 4.623.
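Using the sketch above, these numbers can be reproduced (up to rounding) with the same initialization:

```python
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

W = np.array([[0.1, 0.2], [0.3, 0.4]])
c = np.array([0.1, 0.2])
w = np.array([0.5, -0.6])
b = -0.3

y_hat = forward(X, W, c, w, b)
print(y_hat)           # [-0.37 -0.46 -0.44 -0.53]
print(loss(y_hat, y))  # 4.623 (approximately)
```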

After running gradient descent for 1000 iterations, the approximate weights I obtain are as follows.

\boldsymbol{W}=\left[\begin{array}{ll} 0.81 & 1.06 \\ 0.81 & 1.06 \end{array}\right], \boldsymbol{c}=\left[\begin{array}{c} 0.09 \\ -1.06 \end{array}\right], \boldsymbol{w}=\left[\begin{array}{c} 1.21 \\ -1.86 \end{array}\right], b=-0.11

Using these weights, the total loss is about 0.0001, and it decreases further if I increase the number of iterations. If I change the initial weights slightly by making all negative components positive (i.e. \boldsymbol{w}=\left[\begin{array}{l}0.5 \\ 0.6\end{array}\right] and b=0.3), I obtain a higher total loss. In both cases, I am stuck at a local minimum (although the former is a better one than the latter), because the exact weights are as follows.
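A plain batch gradient descent loop over these formulas (a sketch; the learning rate 0.01 and the 1000 iterations match the text, and the loss value in the comment is the one reported above) would look roughly like this:

```python
lr = 0.01
for _ in range(1000):
    grad_W, grad_c, grad_w, grad_b = gradients(X, y, W, c, w, b)
    W = W - lr * grad_W
    c = c - lr * grad_c
    w = w - lr * grad_w
    b = b - lr * grad_b

print(loss(forward(X, W, c, w, b), y))  # about 0.0001 in the run reported above
```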

\boldsymbol{W}=\left[\begin{array}{ll} 1 & 1 \\ 1 & 1 \end{array}\right], \boldsymbol{c}=\left[\begin{array}{c} 0 \\ -1 \end{array}\right], \boldsymbol{w}=\left[\begin{array}{c} 1 \\ -2 \end{array}\right], b=0
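It is easy to verify that these exact weights reproduce XOR with zero loss; for instance, plugging them into the forward pass sketched earlier:

```python
W_exact = np.array([[1., 1.], [1., 1.]])
c_exact = np.array([0., -1.])
w_exact = np.array([1., -2.])
b_exact = 0.0

print(forward(X, W_exact, c_exact, w_exact, b_exact))  # [0. 1. 1. 0.]
```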

Getting stuck at a local minimum because of the initial choice of weights is a significant problem in the field of deep learning. An important and well-known result is that even if the architecture of a neural network has the capacity to predict some outcome correctly, finding weights that realize this capacity may not be easy. I may attempt to address this issue in the future.

The Mathematica code that I wrote for the backpropagation analysis can be found here.

Appendix

\textbf{Proposition.} \nabla_{\boldsymbol{c}} J=\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)^{\top} \mathbf{1} \text { and } \frac{\partial J}{\partial \boldsymbol{W}}=\boldsymbol{X}^{\top}\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)

\textit{Proof.} In order to find \nabla_{\boldsymbol{c}} J, consider the following. By the chain rule, we have that

\begin{aligned} \frac{\partial J}{\partial \boldsymbol{H}^{(1)}} &= \sum_{j} \frac{\partial a_{j}^{(2)}}{\partial \boldsymbol{H}^{(1)}} \frac{\partial J}{\partial a_{j}^{(2)}} \\ &= \left[\begin{array}{ccc} w_{1} & \ldots & w_{n} \\ 0 & \ldots & 0 \\ \vdots & & \vdots \\ 0 & \ldots & 0 \end{array}\right] 2\left(a_{1}^{(2)}-y_{1}\right)+\cdots+\left[\begin{array}{ccc} 0 & \ldots & 0 \\ \vdots & & \vdots \\ 0 & \ldots & 0 \\ w_{1} & \ldots & w_{n} \end{array}\right] 2\left(a_{m}^{(2)}-y_{m}\right) \\ &=2\left[\begin{array}{c} a_{1}^{(2)}-y_{1} \\ \vdots \\ a_{m}^{(2)}-y_{m} \end{array}\right] \boldsymbol{w}^{\top}=2(\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}. \tag{1} \end{aligned} We also have that \boldsymbol{H}^{(1)}=q\left(\boldsymbol{A}^{(1)}\right) = \left[\begin{array}{ccc}q\left(A_{11}^{(1)}\right) & \ldots & q\left(A_{1 n}^{(1)}\right) \\ \vdots & \ddots & \vdots \\ q\left(A_{m 1}^{(1)}\right) & \ldots & q\left(A_{m n}^{(1)}\right)\end{array}\right] and, for index pairs j and k of the entries of these matrices, \left(\frac{\partial H_{j}^{(1)}}{\partial \boldsymbol{A}^{(1)}}\right)_{k} = \frac{\partial H_{j}^{(1)}}{\partial A_{k}^{(1)}} = \left\{\begin{array}{ll} q^{\prime}\left(A_{j}^{(1)}\right) & \text { if } j=k \\ 0 & \text { otherwise } \end{array}\right. implying that \begin{aligned} \frac{\partial J}{\partial \boldsymbol{A}^{(1)}}&=\sum_{j} \frac{\partial H_{j}^{(1)}}{\partial \boldsymbol{A}^{(1)}} \frac{\partial J}{\partial H_{j}^{(1)}} = \sum_{j} \frac{\partial H_{j}^{(1)}}{\partial \boldsymbol{A}^{(1)}} \, 2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right)_{j} \\ &=2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right)_{11}\left[\begin{array}{ccc} q^{\prime}\left(A_{11}^{(1)}\right) & \ldots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \ldots & 0 \end{array}\right]+\cdots+2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right)_{m n}\left[\begin{array}{ccc} 0 & \ldots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \ldots & q^{\prime}\left(A_{m n}^{(1)}\right) \end{array}\right] \\ &=2\left[\begin{array}{ccc} \left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right)_{11} q^{\prime}\left(A_{11}^{(1)}\right) & \ldots & \left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right)_{1 n} q^{\prime}\left(A_{1 n}^{(1)}\right) \\ \vdots & \ddots & \vdots \\ \left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right)_{m 1} q^{\prime}\left(A_{m 1}^{(1)}\right) & \ldots & \left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right)_{m n} q^{\prime}\left(A_{m n}^{(1)}\right) \end{array}\right] \\ & =2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right). \tag{2} \end{aligned}

Hence, \begin{aligned} \nabla_{\boldsymbol{c}} J&=\sum_j \frac{\partial A_j^{(1)}}{\partial \boldsymbol{c}} \frac{\partial J}{\partial A_j^{(1)}} \\ &=\left[\begin{array}{c}\frac{\partial A_{11}^{(1)}}{\partial c_1} \\ \vdots \\ \frac{\partial A_{11}^{(1)}}{\partial c_n}\end{array}\right]\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{11}+\cdots \\ &=\left[\begin{array}{c} 1 \\ \vdots \\ 0 \end{array}\right]\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{11}+\cdots+\left[\begin{array}{c} 0 \\ \vdots \\ 1 \end{array}\right]\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{1 n}+\cdots \\ &\quad+\left[\begin{array}{c} 1 \\ \vdots \\ 0 \end{array}\right]\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{m 1}+\cdots+\left[\begin{array}{c} 0 \\ \vdots \\ 1 \end{array}\right]\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{m n} \\ & =\left[\begin{array}{c} \left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{11}+\cdots+\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{m 1} \\ \vdots \\ \left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{1 n}+\cdots+\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{m n} \end{array}\right] \\ & =\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)^{\top} \mathbf{1}. \tag{3} \end{aligned}

For \frac{\partial J}{\partial \boldsymbol{W}}, we have that \begin{aligned} \frac{\partial J}{\partial \boldsymbol{W}}&=\sum_{j} \frac{\partial A_{j}^{(1)}}{\partial \boldsymbol{W}} \frac{\partial J}{\partial A_{j}^{(1)}} \\ &=\left[\begin{array}{ccc} X_{11} & \ldots & 0 \\ \vdots & & \vdots \\ X_{1 n} & \ldots & 0 \end{array}\right]\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{11}+\cdots+\left[\begin{array}{ccc} X_{m 1} & \ldots & 0 \\ \vdots & & \vdots \\ X_{m n} & \ldots & 0 \end{array}\right]\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{m 1}+\cdots \\ &\quad+\left[\begin{array}{ccc} 0 & \ldots & X_{11} \\ \vdots & & \vdots \\ 0 & \ldots & X_{1 n} \end{array}\right]\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{1 n}+\cdots+\left[\begin{array}{ccc} 0 & \ldots & X_{m 1} \\ \vdots & & \vdots \\ 0 & \ldots & X_{m n} \end{array}\right]\left(2\left((\hat{\boldsymbol{y}}-\boldsymbol{y}) \boldsymbol{w}^{\top}\right) \odot q^{\prime}\left(\boldsymbol{A}^{(1)}\right)\right)_{m n} \\ &=\left[\begin{array}{ccc} X_{11} Q_{11}+\cdots+X_{m 1} Q_{m 1} & \cdots & X_{11} Q_{1 n}+\cdots+X_{m 1} Q_{m n} \\ \vdots & \ddots & \vdots \\ X_{1 n} Q_{11}+\cdots+X_{m n} Q_{m 1} & \cdots & X_{1 n} Q_{1 n}+\cdots+X_{m n} Q_{m n} \end{array}\right] \\ &=\boldsymbol{X}^{\top} \boldsymbol{Q} \tag{4} \end{aligned}

where \boldsymbol{X}=\left[\begin{array}{ccc}X_{11} & \ldots & X_{1 n} \\ \vdots & \ddots & \vdots \\ X_{m 1} & \ldots & X_{m n}\end{array}\right] and \boldsymbol{Q}=\frac{\partial J}{\partial \boldsymbol{A}^{(1)}}.
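One way to gain extra confidence in the closed-form expression for \frac{\partial J}{\partial \boldsymbol{W}}, beyond the proof above, is a finite-difference check. The sketch below compares the \boldsymbol{X}^{\top} \boldsymbol{Q} formula, as implemented in the gradients function earlier, against central differences of the loss; the comparison is only meaningful when no entry of \boldsymbol{A}^{(1)} sits exactly at the ReLU kink.

```python
def numerical_grad_W(X, y, W, c, w, b, eps=1e-6):
    # Central finite differences of J with respect to each entry of W
    G = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            W_plus, W_minus = W.copy(), W.copy()
            W_plus[i, j] += eps
            W_minus[i, j] -= eps
            G[i, j] = (loss(forward(X, W_plus, c, w, b), y)
                       - loss(forward(X, W_minus, c, w, b), y)) / (2 * eps)
    return G

analytic = gradients(X, y, W, c, w, b)[0]   # X^T Q from the proposition
numeric = numerical_grad_W(X, y, W, c, w, b)
print(np.max(np.abs(analytic - numeric)))   # should be very small
```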

\textbf{Proposition.} In some models, an activation function is also applied to \boldsymbol{a}^{(2)}, giving \boldsymbol{h}^{(2)}=q\left(\boldsymbol{a}^{(2)}\right) (which then serves as the predicted value). In such models, \nabla_{\boldsymbol{a}^{(2)}} J=q^{\prime}\left(\boldsymbol{a}^{(2)}\right) \odot \nabla_{\boldsymbol{h}^{(2)}} J.

\textit{Proof.} We know that J is a function of \boldsymbol{h}^{(2)} and that \boldsymbol{h}^{(2)} is a function of \boldsymbol{a}^{(2)}. By the chain rule,

\nabla_{\boldsymbol{a}^{(2)}} J=\left(\frac{\partial \boldsymbol{h}^{(2)}}{\partial \boldsymbol{a}^{(2)}}\right)^{\top} \nabla_{\boldsymbol{h}^{(2)}} J.

But we know that \boldsymbol{h}^{(2)}=q\left(\boldsymbol{a}^{(2)}\right)=\left[\begin{array}{c}q\left(a_{1}^{(2)}\right) \\ \vdots \\ q\left(a_{n}^{(2)}\right)\end{array}\right]. Hence, \frac{\partial \boldsymbol{h}^{(2)}}{\partial \boldsymbol{a}^{(2)}}=\left[\begin{array}{ccc}q^{\prime}\left(a_{1}^{(2)}\right) & \ldots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \ldots & q^{\prime}\left(a_{n}^{(2)}\right)\end{array}\right]. So,

\left(\frac{\partial \boldsymbol{h}^{(2)}}{\partial \boldsymbol{a}^{(2)}}\right)^{\top} \nabla_{\boldsymbol{h}^{(2)}} J =\left[\begin{array}{c} q^{\prime}\left(a_1^{(2)}\right)\left(\nabla_{\boldsymbol{h}^{(2)}} J\right)_1 \\ \vdots \\ q^{\prime}\left(a_n^{(2)}\right)\left(\nabla_{\boldsymbol{h}^{(2)}} J\right)_n \end{array}\right] =q^{\prime}\left(\boldsymbol{a}^{(2)}\right) \odot \nabla_{\boldsymbol{h}^{(2)}} J.

This proposition is relevant because it shows how, for an element-wise activation, multiplying by the (diagonal) Jacobian in the chain rule reduces to a Hadamard product.
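As a concrete numerical instance of this proposition (a sketch assuming a sigmoid output activation, which the XOR model above does not actually use), the identity can be checked against finite differences:

```python
def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

a2 = np.array([0.2, -0.5, 1.0, 0.3])       # an arbitrary a^(2)
targets = np.array([0., 1., 1., 0.])
h2 = sigmoid(a2)                           # h^(2) = q(a^(2))

grad_h2 = 2.0 * (h2 - targets)             # gradient of J with respect to h^(2)
grad_a2 = sigmoid(a2) * (1 - sigmoid(a2)) * grad_h2   # q'(a^(2)) times grad_{h^(2)} J, element-wise

def J_of_a(a):
    # Squared error as a function of a^(2), with h^(2) = sigmoid(a)
    return np.sum((sigmoid(a) - targets) ** 2)

eps = 1e-6
numeric = np.array([(J_of_a(a2 + eps * e) - J_of_a(a2 - eps * e)) / (2 * eps)
                    for e in np.eye(len(a2))])
print(np.max(np.abs(grad_a2 - numeric)))   # should be very small
```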