I am trying to solve the equation below numerically for various functions $I(x)$:
$$ \frac{\partial^2}{\partial x^2} G(x) + I(x)G(x) = 0 $$
Subject to the boundary conditions:
$$ \frac{\partial}{\partial x} G(x)\Big|_{x=0} = C_1 $$ $$ \frac{\partial}{\partial x} G(x)\Big|_{x=L} = C_2 $$ $$ \frac{\partial}{\partial x} G(x)\Big|_{x=x_0+\epsilon} - \frac{\partial}{\partial x} G(x)\Big|_{x=x_0-\epsilon} = 1 $$
Where $x_0 \in [0,L]$, $\epsilon\to 0$, and $C_1$ and $C_2$ are given.
Normally, if there were no boundary conditions, I would discretize $[0,L]$ into $N$ grid points, then approximate $\frac{\partial^2}{\partial x^2}$ using a finite-difference stencil, yielding an $N\times N$ matrix (up to a factor of $1/h^2$, where $h$ is the grid spacing). I would then add the discretized version of $I(x)$ as a diagonal matrix, yielding:
$$ \left( \begin{bmatrix} -1 & 1 & 0 & \dots & 0 & 0 \\ 1 & -2 & 1 & \dots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \dots & -2 & 1 \\ 0 & 0 & 0 & \dots & 1 & -1 \end{bmatrix} + \begin{bmatrix} I(x_1) & & & \\ & I(x_2) & & \\ & & \ddots & \\ & & & I(x_N) \end{bmatrix} \right) \begin{bmatrix} G_1 \\ G_2 \\ \vdots \\ G_N \end{bmatrix} = 0 $$
Finding $G$ is then a matter of solving the resulting linear system.
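For concreteness, here is a minimal NumPy sketch of the assembly step described above, before any boundary conditions are imposed. The function name `build_operator` is my own, and I have restored the $1/h^2$ factor explicitly; this is only meant to illustrate the setup in the question, not to answer it:

```python
import numpy as np

def build_operator(I_func, L, N):
    """Assemble the N x N matrix D/h^2 + diag(I(x_i)).

    I_func : callable returning I(x) for an array of grid points.
    No boundary conditions are imposed here; this is only the plain
    second-difference discretization sketched in the question.
    """
    x = np.linspace(0.0, L, N)
    h = x[1] - x[0]
    # Second-difference stencil on a uniform grid.
    D = np.diag(-2.0 * np.ones(N)) \
      + np.diag(np.ones(N - 1), 1) \
      + np.diag(np.ones(N - 1), -1)
    D[0, 0] = -1.0    # one-sided difference at the left endpoint
    D[-1, -1] = -1.0  # one-sided difference at the right endpoint
    return x, D / h**2 + np.diag(I_func(x))
```

Once the boundary-condition rows and the right-hand side are decided, the system would be solved with `np.linalg.solve(A, b)` (or a sparse solver for large $N$).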
My question: how do I incorporate the boundary conditions into the expression above? Should the derivative matrix (the first matrix) change, or should the right-hand side become a vector other than $0$? Is there a good reference for learning how to numerically solve differential equations with unusual boundary conditions?