The kernel method in machine learning consists of encoding input data as vectors in a Hilbert space, called the feature space, and modeling the target function as a linear map on that space. Given a cost function, finding the optimal linear map requires computing a kernel matrix whose entries equal the inner products of feature vectors. In the quantum kernel method it is assumed that the feature vectors are quantum states, in which case the quantum kernel matrix is given in terms of the overlaps of those states. In practice, the entries of the quantum kernel matrix are estimated by applying, e.g., the SWAP test, and the number of such SWAP tests is a relevant parameter in evaluating the performance of the quantum kernel method. Moreover, quantum systems are subject to noise, so the quantum states serving as feature vectors cannot be prepared exactly; this is another source of error in the computation of the quantum kernel matrix. Taking both of these considerations into account, we prove a bound on the performance (generalization error) of the quantum kernel method.
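To make the estimation step concrete, the following is a minimal classical simulation of SWAP-test-based kernel estimation. It uses the standard fact that the SWAP test's ancilla returns outcome 0 with probability (1 + |⟨ψ|φ⟩|²)/2, so a finite number of shots yields a noisy estimate of the kernel entry |⟨ψ|φ⟩|². The state dimension, shot count, and helper names are illustrative assumptions, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim, rng):
    # Illustrative random pure state: normalized complex Gaussian vector.
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def swap_test_estimate(psi, phi, shots, rng):
    # The SWAP test's ancilla yields 0 with probability (1 + |<psi|phi>|^2)/2.
    # We sample that outcome `shots` times and invert the relation to
    # estimate the kernel entry |<psi|phi>|^2.
    p0 = (1.0 + abs(np.vdot(psi, phi)) ** 2) / 2.0
    zeros = rng.binomial(shots, p0)
    return 2.0 * zeros / shots - 1.0

psi = random_state(4, rng)
phi = random_state(4, rng)
exact = abs(np.vdot(psi, phi)) ** 2          # true kernel entry
est = swap_test_estimate(psi, phi, shots=100_000, rng=rng)
```

The gap between `est` and `exact` shrinks as the shot count grows (statistical error of order 1/sqrt(shots)), which is exactly why the number of SWAP tests enters the performance bound discussed above.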