This idea is also at the core of the Finite Element Method (and other Galerkin methods, such as DG), which is probably the most widespread family of methods for approximating solutions of partial differential equations (PDEs), used everywhere in physics and engineering.
Basically, define a finite-dimensional function space V_N of dimension N, in such a way that you can grow N arbitrarily large. Solve not the original PDE (posed over an infinite-dimensional function space V, such as H^1), but its discretization, as if it dealt only with functions in V_N rather than all of V. The discretized problem is then simply a linear system, easy to solve. And you can prove, for instance for elliptic PDEs, that the discrete solution is (up to a constant) the best approximation of the true solution of the PDE (in V) by elements of V_N (Céa's Lemma); in the symmetric case it is exactly the orthogonal projection onto V_N in the energy norm. Finally, you can produce estimates of the error this projection incurs as a function of N, and thus give theoretical guarantees that the algorithm converges to the true solution as N goes to infinity.
(N in this case is the number of vertices of the mesh used to define the basis functions of V_N)
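To make this concrete, here is a minimal sketch of my own (not taken from anything above): piecewise-linear FEM for -u'' = f on (0,1) with u(0) = u(1) = 0, in Python/NumPy. V_N is spanned by the "hat" functions at the N interior mesh nodes, the stiffness matrix and load vector are assembled directly, and the discrete problem is literally a tridiagonal linear solve; the load integrals use a simple nodal quadrature, which is an assumption of this sketch rather than the only way to do it.

```python
import numpy as np

def fem_1d_poisson(N, f):
    # Uniform mesh on (0,1) with N interior nodes; V_N = span of hat functions
    h = 1.0 / (N + 1)
    x = np.linspace(h, 1.0 - h, N)
    # Stiffness matrix K_ij = integral of phi_i' * phi_j' (tridiagonal for hats)
    K = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h
    # Load vector b_i = integral of f * phi_i, approximated by nodal quadrature
    b = h * f(x)
    # The discretized PDE is just this linear system
    u = np.linalg.solve(K, b)
    return x, u

# Manufactured solution: u(x) = sin(pi x), hence f(x) = pi^2 sin(pi x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
for N in (8, 16, 32, 64):
    x, u = fem_1d_poisson(N, f)
    err = np.max(np.abs(u - np.sin(np.pi * x)))
    print(N, err)  # error shrinks as N grows, as the convergence theory predicts
```

Running it, the maximum nodal error decreases as N grows, which is the practical face of the convergence guarantee described above.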