Query: reinforcement learning
Terms related to the query
Term | Similarity | Weight in brain map | N (documents) | In query / expansion
reward | 0.04062144302109046 | 1.0 | 2823 | In expansion
striatum | 0.005904336846841872 | 0.09250583207318654 | 3024 | In expansion
putamen | 0.0018019196067676163 | 0.03518359559763759 | 3208 | In expansion
visual | 0.00047113936387511404 | 0.02444533722314772 | 10448 | In expansion
task | 0.0020581969535633433 | 0.022916499650374244 | 12194 | In expansion
motor | 0.0003764132319898982 | 0.019766213683330862 | 7928 | In expansion
thalamus | 0.0002828440840066709 | 0.015090126043141552 | 4891 | In expansion
hippocampus | 0.00025986226060253014 | 0.014712196491175838 | 4610 | In expansion
basal ganglia | 0.002101238705417096 | 0.013353564601245609 | 2555 | In expansion
cerebellum | 0.00020929269854767787 | 0.013050998666425648 | 5578 | In expansion
ganglia | 0.002053652270652057 | 0.011872342263858863 | 2581 | In expansion
motion | 0.0002725316854899672 | 0.011708775051100741 | 9061 | In expansion
frontal | 0.0005011645049832831 | 0.011542971737463532 | 11471 | In expansion
ventral striatum | 0.00325406163900286 | 0.0111418241924897 | 1418 | In expansion
striatal | 0.002638818065717975 | 0.009627896894061103 | 2076 | In expansion
prefrontal | 0.0002939050302029003 | 0.009565032106821832 | 10017 | In expansion
parietal | 0.00013565283527077363 | 0.009478497831097114 | 10159 | In expansion
caudate | 0.0005747017498902475 | 0.009460382217856122 | 3615 | In expansion
cingulate | 0.0001978098751908114 | 0.009458146512690982 | 9501 | In expansion
left | 0.0003938003241078219 | 0.0067768348572375065 | 12782 | In expansion
acc | 0.0003445583398142549 | 0.00563139638226864 | 3579 | In expansion
auditory | 8.872987866985699e-05 | 0.005586929735224396 | 4009 | In expansion
movement | 0.0003949311988284504 | 0.005335203728953692 | 8260 | In expansion
orbitofrontal | 0.0002577441799528481 | 0.005131250649611938 | 3508 | In expansion
insula | 6.78738350567239e-05 | 0.004650070214693844 | 7050 | In expansion
ofc | 0.0003032744874648406 | 0.0045972366083398535 | 1511 | In expansion
amygdala | 8.72364487570742e-05 | 0.004380796869949683 | 4540 | In expansion
finger | 0.0002788467651451434 | 0.004175628510799222 | 3165 | In expansion
midbrain | 0.002349911125289478 | 0.003945930579739573 | 1194 | In expansion
temporal | 0.00010832912184979777 | 0.00389014340264494 | 11897 | In expansion
anterior | 0.0002845449619655786 | 0.003765918795392239 | 11200 | In expansion
right | 0.0004707384985739332 | 0.0033289170088544365 | 13076 | In expansion
basal | 0.0017025495739754841 | 0.00312045578730936 | 3007 | In expansion
somatosensory | 7.989148809454434e-05 | 0.002970680148704929 | 2370 | In expansion
word | 0.00019923173016495293 | 0.002821481082234788 | 6479 | In expansion
memory | 0.0004345620967310702 | 0.0027401614368350617 | 8405 | In expansion
hand | 9.444611503256336e-05 | 0.002572149705678411 | 7603 | In expansion
ffa | 0.00010042187755829514 | 0.0023104468386513878 | 407 | In expansion
anterior insula | 0.0005656280881421274 | 0.0022041320107998572 | 2682 | In expansion
fa | 6.793537527152519e-05 | 0.002191684727974076 | 1796 | In expansion
performance monitoring | 0.004315280678606148 | 0.0 | 363 | In expansion
performance | 0.0073258308400809755 | 0.0 | 9105 | In expansion
prediction | 0.018061220640907243 | 0.0 | 4732 | In expansion
prediction error | 0.016744422673643164 | 0.0 | 448 | In expansion
probability | 0.005671701545140227 | 0.0 | 4940 | In expansion
motor learning | 0.007144483017051035 | 0.0 | 401 | In expansion
model | 0.0037111372117404158 | 0.0 | 12141 | In expansion
motor sequence learning | 0.004080310295277771 | 0.0 | 120 | In expansion
monetary | 0.00554562795762276 | 0.0 | 1212 | In expansion
monitoring | 0.005553643227036992 | 0.0 | 4652 | In expansion
making | 0.010352524397660682 | 0.0 | 6164 | In expansion
outcome | 0.008716183912427258 | 0.0 | 4269 | In expansion
trial | 0.006622088758994996 | 0.0 | 7031 | In expansion
uncertainty | 0.0036930843177384962 | 0.0 | 1589 | In expansion
reward processing | 0.004634449264307872 | 0.0 | 921 | In expansion
reversal | 0.003718267075672181 | 0.0 | 631 | In expansion
reversal learning | 0.0034724011018156553 | 0.0 | 174 | In expansion
response | 0.0037290157089808987 | 0.0 | 12421 | In expansion
reinforcement | 0.010026778451911297 | 0.0 | 810 | In expansion
reinforcement learning | 1.0 | 0.0 | 290 | In query
related | 0.004386236108147562 | 0.0 | 12943 | In expansion
skill | 0.003227304781040468 | 0.0 | 2168 | In expansion
sequence | 0.006956425859414969 | 0.0 | 10577 | In expansion
sequence learning | 0.0073509439037682854 | 0.0 | 176 | In expansion
choice | 0.00460348829873759 | 0.0 | 3444 | In expansion
decision | 0.01840721729182947 | 0.0 | 4783 | In expansion
decision making | 0.0161726544886673 | 0.0 | 2389 | In expansion
correct | 0.0037043680798237036 | 0.0 | 11851 | In expansion
behavior | 0.0039148145248741354 | 0.0 | 9967 | In expansion
awareness | 0.004714535160289067 | 0.0 | 2190 | In expansion
implicit | 0.005082204715332598 | 0.0 | 2063 | In expansion
implicit learning | 0.003244581862193039 | 0.0 | 169 | In expansion
learned | 0.004857706297808037 | 0.0 | 2048 | In expansion
learning | 0.07618675307810403 | 0.0 | 5239 | In expansion
error | 0.03360815434212196 | 0.0 | 8304 | In expansion
feedback | 0.026517728774032464 | 0.0 | 3189 | In expansion
feedback processing | 0.0036904343120559274 | 0.0 | 159 | In expansion
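The expansion above looks like the output of a NeuroQuery-style text-to-brain-map encoder. As a minimal sketch, assuming the query was run through the open-source `neuroquery` Python package and its pretrained model (an assumption; the page does not name the tool), a comparable table can be generated as follows. Exact terms, weights, and column names depend on the packaged model version.

```python
# Minimal sketch, assuming the open-source `neuroquery` package and its
# pretrained model; exact terms, weights, and column names depend on the
# model release and may differ from the table above.
from neuroquery import fetch_neuroquery_model, NeuroQueryModel

# Download the pretrained model (cached after the first call) and load it.
encoder = NeuroQueryModel.from_data_dir(fetch_neuroquery_model())

# Encode the query; the result behaves like a dict of outputs.
result = encoder("reinforcement learning")

# Terms related to the query, with their similarity to the query and the
# weight each term receives in the predicted brain map.
print(result["similar_words"].head(20))
```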
Predicted distribution of activations in the literature (predicted brain map, available for download).
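Continuing the sketch above (same assumptions about the `neuroquery` package), the predicted activation map can be saved to disk, which is what the page's download link provides, or inspected interactively with nilearn:

```python
# Same assumptions as the sketch above: `result` comes from the neuroquery
# encoder and exposes the predicted activation map as a NIfTI image.
from nilearn import plotting

brain_map = result["brain_map"]
brain_map.to_filename("reinforcement_learning_map.nii.gz")  # equivalent of "Download map"

# Thresholded interactive view in the browser.
plotting.view_img(brain_map, threshold=3.0).open_in_browser()
```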
Publications related to the query
Reinforcement Learning in Multidimensional Environments Relies on Attention Mechanisms
Prediction error in reinforcement learning: A meta-analysis of neuroimaging studies
Individual Differences in Reinforcement Learning: Behavioral, Electrophysiological, and Neuroimaging Correlates
Striatal activations signal prediction errors on confidence in the absence of external feedback
Continuous theta-burst stimulation (cTBS) over the lateral prefrontal cortex alters reinforcement learning bias
Individual differences and the neural representations of reward expectation and reward prediction error
Self-regulation of the anterior insula: Reinforcement learning using real-time fMRI neurofeedback
Single Dose of a Dopamine Agonist Impairs Reinforcement Learning in Humans: Evidence from Event-related Potentials and Computational Modeling of Striatal-Cortical Function
Reinforcement Learning Signal Predicts Social Conformity
States versus Rewards: Dissociable neural prediction error signals underlying model-based and model-free reinforcement learning
Neural signatures of experience-based improvements in deterministic decision-making
Dissociating the Contributions of Independent Corticostriatal Systems to Visual Categorization Learning Through the Use of Reinforcement Learning Modeling and Granger Causality Modeling
Brain mechanism of reward prediction under predictable and unpredictable environmental dynamics
Multiple brain networks contribute to the acquisition of bias in perceptual decision-making
Striatum and Insula Dysfunction during Reinforcement Learning Differentiates Abstinent and Relapsed Methamphetamine Dependent Individuals
Altered activation in association with reward-related trial-and-error learning in patients with schizophrenia
Remedial action and feedback processing in a time-estimation task: Evidence for a role of the rostral cingulate zone in behavioral adjustments without learning
Neurocomputational mechanisms of prosocial learning and links to empathy
Decision Making: Neural Mechanisms: Neural basis of decision making guided by emotional outcomes
fMRI evidence of a relationship between hypomania and both increased goal-sensitivity and positive outcome-expectancy bias
Error-Likelihood Prediction in the Medial Frontal Cortex: A Critical Evaluation
Neural correlates of state-based decision-making in younger and older adults
Theta oscillations integrate functionally segregated sub-regions of the medial prefrontal cortex
Cocaine dependent individuals with attenuated striatal activation during reinforcement learning are more susceptible to relapse
Impaired implicit learning and feedback processing after stroke
Neural Regions that Underlie Reinforcement Learning Also Engage in Social Expectancy Violations
Determining a Role for Ventromedial Prefrontal Cortex in Encoding Action-Based Value Signals During Reward-Related Decision Making
Brain and behavioral evidence for altered social learning mechanisms among women with assault-related posttraumatic stress disorder
Medial Prefrontal Cortex Predicts and Evaluates the Timing of Action Outcomes
Two Sides of the Same Coin: Learning via Positive and Negative Reinforcers in the Human Striatum
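Under the same assumptions, the list of related publications can also be retrieved programmatically rather than read off the page:

```python
# Same assumptions as above: the encoder result includes a table of the
# publications most related to the query (titles plus PubMed metadata).
print(result["similar_documents"].head(30))
```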