Our new pre-print investigates how to learn to make optimal decisions when uncertain evidence needs to be accumulated over time and is encoded in neural populations. Diffusion models, the go-to models for such decisions, commonly assume this evidence to be one-dimensional rather than distributed across a population, and its encoding format to be known. We instead assume a distributed representation of the evidence whose format needs to be learned, and derive near-optimal learning rules for this more natural case. These learning rules turn out to include decision confidence as a major modulator of the learning rate, thereby establishing a computational role for decision confidence. The pre-print can be found here: “Learning optimal decisions with confidence”.
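To give a flavor of the idea, here is a minimal toy sketch (not the paper's actual derivation) of what a confidence-modulated learning rule can look like. All names and choices below are illustrative assumptions: a population of `n` neurons encodes a one-dimensional evidence variable through an unknown linear code `w_true`, the decision maker learns readout weights `w`, and the effective learning rate scales with the mismatch between the trial's outcome and the decision confidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: n neurons encode 1-D evidence via an
# unknown linear code w_true; the learner estimates readout w.
n = 20
w_true = rng.normal(size=n)
w_true /= np.linalg.norm(w_true)
w = 0.1 * rng.normal(size=n)  # initial (random) readout weights

def alignment(u, v):
    """Cosine similarity between two weight vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

align_before = alignment(w, w_true)

eta = 0.05  # base learning rate
for trial in range(5000):
    z = rng.choice([-1.0, 1.0])            # true stimulus category
    x = z * w_true + rng.normal(size=n)    # noisy population response
    a = float(w @ x)                       # decision variable (accumulated evidence)
    choice = 1.0 if a >= 0 else -1.0
    conf = 1.0 / (1.0 + np.exp(-abs(a)))   # decision confidence in (0.5, 1)
    outcome = 1.0 if choice == z else 0.0  # feedback: was the choice correct?
    # Confidence-modulated delta rule: the effective learning rate
    # |outcome - conf| is largest after confident errors and smallest
    # after confidently correct choices.
    w += eta * (outcome - conf) * choice * x

align_after = alignment(w, w_true)
```

In this sketch, a confident error produces the largest weight change, while a confidently correct choice produces almost none, so confidence acts as the gain on learning, which is the qualitative role the pre-print derives for it.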