Recursive adaptive filtering methods are often used for solving the problem of simultaneous state and parameter estimation arising in many areas of research. The gradient-based schemes for adaptive Kalman filtering (KF) require the corresponding filter sensitivity computations. The standard approach is based on the direct differentiation of the KF equations. The shortcoming of this strategy is the numerical instability of the conventional KF (and its derivatives) with respect to roundoff errors. For decades, special attention has been paid in the KF community to designing efficient filter implementations that improve the robustness of the estimator against roundoff. The most popular and beneficial techniques are found in the class of square-root (SR) or UD factorization-based methods. They rely on the Cholesky decomposition of the corresponding error covariance matrix. Another important matrix factorization method is the singular value decomposition (SVD) and, hence, further promising KF algorithms might be found under this approach. Meanwhile, the filter sensitivity computation relies heavily on the use of matrix differential calculus. Previous works on robust KF derivative computation have produced the SR- and UD-based methodologies. Alternatively, in this paper we design the SVD-based approach. The solution is expressed in terms of the SVD-based KF covariance quantities and their derivatives (with respect to unknown system parameters). The results of numerical experiments illustrate that although the newly developed SVD-based method is algebraically equivalent to the conventional approach and the previously derived SR- and UD-based strategies, it outperforms the mentioned techniques in estimation accuracy in ill-conditioned situations.
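To make the central idea concrete, the following is a minimal NumPy sketch (not the paper's algorithm) of the factorization the abstract refers to: an error covariance matrix P is represented by its SVD factors, P = V diag(d) V^T, which for a symmetric positive semi-definite P coincides with its eigendecomposition; an SVD-based filter propagates the factors V and d instead of P itself.

```python
import numpy as np

# Illustrative sketch only: factor a covariance matrix P via SVD and
# verify that the factors reproduce it. The matrix P here is random,
# not taken from any filtering problem in the paper.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = A @ A.T  # a symmetric positive definite "covariance" matrix

# For symmetric positive semi-definite P, the left and right singular
# vectors coincide, so the SVD reads P = V diag(d) V^T.
V, d, _ = np.linalg.svd(P)
P_rec = V @ np.diag(d) @ V.T

print(np.allclose(P, P_rec))  # the SVD factors reconstruct P
```

Working with the well-conditioned factors V and d (rather than squaring them back into P) is what gives factorization-based filters their improved roundoff robustness, analogously to the Cholesky factors in SR- and UD-based implementations.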