Haptic shared control is a promising approach to improving tele-manipulated task execution, as it makes safe and effective control actions tangible through guidance forces. In current research, these guidance forces are most often generated from predefined, error-free models of the remote environment, and are therefore free of the inaccuracies that can be expected in practical implementations. The goal of this research is to quantify the extent to which task execution is degraded by inaccuracies in the model on which haptic guidance forces are based. In a human-in-the-loop experiment, subjects (n = 14) performed a realistic tele-manipulated assembly task in a virtual environment. Operators were provided with three levels of haptic guidance: no haptic guidance (conventional tele-manipulation), haptic guidance without inaccuracies, and haptic guidance with translational inaccuracies (one large inaccuracy, on the order of magnitude of the task, and a second, smaller inaccuracy). The quality of natural haptic feedback (i.e., haptic transparency) was varied between high and low to assess the operator’s ability to detect and cope with inaccuracies in the haptic guidance. The results indicate that haptic guidance benefits task execution when the guidance contains no inaccuracies. When inaccuracies are present, task execution may be degraded, depending on the magnitude and direction of the inaccuracy. The effect of inaccuracies on overall task performance is dominated by effects found for the Constrained Translational Movement, due to its potential for jamming. No evidence was found that a higher quality of haptic transparency helps operators detect and cope with inaccuracies in the haptic guidance.