BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20250824T112345EDT-5165jzvhTz@132.216.98.100
DTSTAMP:20250824T152345Z
DESCRIPTION:Depth-Adaptive Neural Networks from the Optimal Control viewpoint\n\nAbstract:\nIn recent years\, deep learning has been connected with optimal control as a way to define a notion of a continuous underlying learning problem. In this view\, neural networks can be interpreted as a discretization of a parametric Ordinary Differential Equation which\, in the limit\, defines a continuous-depth neural network. The learning task then consists in finding the best ODE parameters for the problem under consideration\, and their number increases with the accuracy of the time discretization. Although important steps have been taken to realize the advantages of such continuous formulations\, most current learning techniques fix a discretization (i.e.\, the number of layers is fixed). In this work\, we propose an iterative adaptive algorithm where we progressively refine the time discretization (i.e.\, we increase the number of layers). Provided that certain tolerances are met across the iterations\, we prove that the strategy converges to the underlying continuous problem. One salient advantage of such a shallow-to-deep approach is that it helps to benefit in practice from the higher approximation properties of deep networks by mitigating over-parametrization issues. The performance of the approach is illustrated in several numerical examples.\n\nFor Zoom meeting information please contact tim.hoheisel [at] mcgill.ca\n
DTSTART:20210322T200000Z
DTEND:20210322T210000Z
SUMMARY:Olga Mula (Paris Dauphine)
URL:/mathstat/channels/event/olga-mula-paris-dauphine-329615
END:VEVENT
END:VCALENDAR