This paper examines the transparency challenge that deep learning models used in recommender systems pose for ethical nudging. In particular, I will show how a lack of transparency may make it difficult to assess whether the nudges (1) make people better off, as judged by themselves, and (2) do so without unduly interfering with their autonomy.