Article: Reinforcement Learning - A Technical Introduction.
Most readers are familiar with two types of machine learning. On the one hand, there is supervised learning, where the algorithm learns from a training set of labeled examples provided by a knowledgeable external supervisor. On the other hand, there is unsupervised learning, where the general task is to find hidden structure in unlabeled data. Reinforcement learning (RL) differs from both: it tries to maximize a utility function while learning from a reward signal emitted by an environment that is itself influenced by the algorithm's actions.
Consequently, every reinforcement learning algorithm has to exploit what it already knows about the environment in order to obtain rewards, but it also has to explore its still incompletely known (i.e. stochastic or dynamic) environment and its causal relations in order to choose better actions in future situations. Elmar Diederichs conducted a survey of reinforcement learning, and the results were published in the Journal of Autonomous Intelligence.
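The exploration-exploitation trade-off described above can be illustrated with a minimal sketch: an epsilon-greedy strategy on a multi-armed bandit, which with probability epsilon tries a random action (exploration) and otherwise picks the action with the best reward estimate so far (exploitation). The function name `epsilon_greedy_bandit` and the Gaussian reward model are illustrative assumptions, not taken from the paper.

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=1000, seed=0):
    """Balance exploration and exploitation on a simple multi-armed bandit."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # how often each arm was pulled
    estimates = [0.0] * n_arms     # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:                  # explore: random arm
            arm = rng.randrange(n_arms)
        else:                                       # exploit: best current estimate
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)    # noisy feedback from the environment
        counts[arm] += 1
        # incremental update of the running mean
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward

estimates, total = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

With enough steps, the estimates concentrate on the truly best arm while a small fraction of pulls keeps probing the alternatives; setting epsilon to zero would risk locking onto a suboptimal arm forever.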
In this paper, the author argues that reinforcement learning provides a cognitive-science perspective on behavior and sequential decision making, since RL algorithms introduce a computational concept of agency to the learning problem. RL hence addresses an abstract class of problems that can be characterized as follows: an algorithm confronted with information from an unknown environment must stepwise find an optimal way to behave, based only on sparse, delayed, or noisy feedback from an environment that changes according to the algorithm's behavior. Reinforcement learning therefore offers an abstraction of the problem of goal-directed learning from interaction.
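The agent-environment interaction loop with sparse, delayed feedback can be sketched with tabular Q-learning on a toy chain world: reward arrives only at the final state, and value estimates propagate backward through bootstrapping. The environment, the `step` and `q_learning` names, and all hyperparameters are illustrative assumptions for this sketch, not the paper's method.

```python
import random

# Toy chain environment: states 0..4, actions 0 (left) / 1 (right).
# Only reaching state 4 yields reward 1.0 -- feedback is sparse and delayed.
def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
    reward = 1.0 if next_state == 4 else 0.0
    done = next_state == 4
    return next_state, reward, done

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # behave: epsilon-greedy over the current value estimates
            if rng.random() < epsilon:
                action = rng.randrange(2)
            else:
                action = max((0, 1), key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            # learn: bootstrap from the best value of the next state
            target = reward + gamma * max(q[(next_state, 0)], q[(next_state, 1)])
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
    return q

q = q_learning()
greedy_policy = [max((0, 1), key=lambda a: q[(s, a)]) for s in range(4)]
```

Although the agent only ever sees a reward at the terminal state, repeated interaction lets the discounted value flow back to the start state, so the greedy policy learned moves right everywhere.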
The paper offers an opinionated introduction to the algorithmic advantages and drawbacks of several approaches, so that readers can understand recent developments and open problems in reinforcement learning.
For more information, please visit: