Taming Wild Reward Functions: The Score Function Gradient Estimator Trick

MLE is often not enough to train sequence-to-sequence neural networks in NLP. Instead, we employ an external metric, i.e. a reward function that judges the quality of model outputs. The parameters of the network are then updated on the basis of the model outputs and their corresponding rewards.

For this update, it is necessary to compute a gradient.

But how can we do this if the external function is unknown or not differentiable?

Enter: The score function gradient estimator trick.

Why MLE is not Enough

Traditionally, neural networks are trained using Maximum Likelihood Estimation (MLE): given an input sequence $x$ and a corresponding gold target sequence $y$, we want to increase the probability that the current model with parameters $\theta$ assigns to the pair $(x, y)$. This gives the following loss function:

$$L_{\text{MLE}}(\theta) = -\log p_\theta(y \mid x),$$

where

$$p_\theta(y \mid x) = \prod_{t=1}^{|y|} p_\theta(y_t \mid y_{<t}, x).$$

The parameters $\theta$ are then updated using stochastic gradient descent,

$$\theta \leftarrow \theta - \eta \, \nabla_\theta L_{\text{MLE}}(\theta),$$

where $\eta$ is the learning rate.
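To make this concrete, here is a minimal PyTorch-style sketch of the MLE loss for a single $(x, y)$ pair; the `model` that returns per-token logits under teacher forcing is a hypothetical stand-in for any sequence-to-sequence network:

```python
import torch
import torch.nn.functional as F

def mle_loss(model, x, y):
    """Negative log-likelihood of the gold target y given the input x.

    Assumes (hypothetically) that `model(x, y)` returns logits of shape
    (target_length, vocab_size), i.e. one distribution over the vocabulary
    per gold target position, computed with teacher forcing.
    """
    logits = model(x, y)                       # (T, V)
    log_probs = F.log_softmax(logits, dim=-1)  # (T, V)
    # Sum of -log p_theta(y_t | y_<t, x) over all target positions.
    return -log_probs.gather(1, y.unsqueeze(1)).sum()
```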

But there are various issues with using MLE that have led researchers to explore alternative objectives. Let's look at them next.

1. Gold Targets are not Available

This is most prominently the case in many domains of semantic parsing for question-answering, where a question $x$ is mapped to a semantic parse $y$, which can be executed to obtain an answer $a$. For many domains, it is easier to collect question-answer pairs than question-parse pairs (e.g. see Berant et al. 2013). But with no gold parses available, MLE cannot be applied.

What can we do instead?

The current model produces a set of likely parses (e.g. by sampling from the model distribution or by employing beam search). Each parse $\tilde{y}$ is then executed to obtain an answer $\tilde{a}$. Next, we compare this answer to the gold answer $a$ to obtain a reward $\Delta(\tilde{y}) \in [0, 1]$. Generally, we have $\Delta(\tilde{y}) = 0$ if there is no overlap between the obtained answer and the gold answer, and $\Delta(\tilde{y}) = 1$ if they match exactly. With this reward, we can update the model's parameters.
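As an illustration, a reward of this kind could be computed roughly as in the following sketch; `execute_parse` and the set-overlap scoring are hypothetical placeholders, not a recipe taken from the papers cited above:

```python
def answer_reward(parse, gold_answer, execute_parse):
    """Execute a predicted parse and score its answer against the gold answer.

    `execute_parse` is a placeholder for whatever executor the underlying
    knowledge base or database provides; answers are treated as sets.
    Returns 0.0 for no overlap, 1.0 for an exact match, and an F1-style
    overlap score in between.
    """
    predicted = set(execute_parse(parse))
    gold = set(gold_answer)
    if not predicted or not gold:
        return 0.0
    overlap = len(predicted & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```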

2. Exposure Bias: Ranzato et al. 2016

During traditional MLE training, the model is fed the correct previous tokens from the available gold target $y$, but at test time the output sequence is produced on the basis of the model distribution. This mismatch between the training and test-time distributions leads to inferior performance.

How can we reduce this mismatch?

Instead, we can feed the model its own output sequences already at training time. Typically, once an entire output sequence has been produced, it is judged by an external metric and the resulting reward is used as feedback to update the model's parameters.

3. Loss-Evaluation Mismatch: Wiseman & Rush 2016

MLE is agnostic to the final evaluation metric. Ideally we would like to have the final evaluation metric in the objective used at training time, so that the parameters of the model are specifically tuned to perform well on the intended task.

How can we do that?

Similar to problem (2.), we can feed model output sequences at training time. In this case the external metric is the final evaluation metric. For example, in the case of machine translation, typically a per-sentence approximation of the BLEU score is used.
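For illustration, such a smoothed per-sentence BLEU reward could be computed with NLTK as sketched below; NLTK and the particular smoothing method are arbitrary example choices:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def sentence_bleu_reward(hypothesis_tokens, reference_tokens):
    """Smoothed per-sentence BLEU in [0, 1], used as the reward."""
    smoother = SmoothingFunction().method1
    return sentence_bleu([reference_tokens], hypothesis_tokens,
                         smoothing_function=smoother)

# Example:
# reward = sentence_bleu_reward("the cat sat".split(),
#                               "the cat sat down".split())
```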

Maximise the Expected Reward Obtained for Model Outputs

To solve all three problems, we can instead maximise the expected reward or, equivalently, minimise the expected risk $R(\theta)$. This can be formulated as the following expectation:

$$R(\theta) = - \mathbb{E}_{x \sim p(x)} \, \mathbb{E}_{\tilde{y} \sim p_\theta(\tilde{y} \mid x)} \left[ \Delta(\tilde{y}) \right],$$

where $p(x)$ is the probability distribution over inputs and $p_\theta(\tilde{y} \mid x)$ is the probability distribution over outputs $\tilde{y}$ given $x$.

In practice, this expectation has to be approximated. For example, using Monte-Carlo sampling leads to the REINFORCE algorithm (Williams 1992): we sample one output $\tilde{y} \sim p_\theta(\tilde{y} \mid x)$ from the model distribution (see also Chapter 13 of Sutton & Barto 2018). Approximating the expectation over $x$ with the inputs of the training set, the actual training objective becomes:

$$\hat{R}_{\text{REINFORCE}}(\theta) = - \Delta(\tilde{y}) \, \log p_\theta(\tilde{y} \mid x).$$

The goal of this objective is to increase the probability of an output proportionally to its reward. The gradient of this REINFORCE objective is an unbiased estimate of the gradient of the expected risk $R(\theta)$, as we will see below.
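Putting this together, a single REINFORCE update could look roughly like the following sketch; `model.sample_output` and `reward_fn` are hypothetical placeholders for a concrete sequence model with a sampling routine and for the external metric:

```python
import torch

def reinforce_step(model, x, reward_fn, optimizer):
    """One stochastic update of the objective -reward * log p_theta(y~ | x)."""
    # Sample one output from the current model distribution and keep its
    # (differentiable) log-probability; with torch.distributions this is
    # a sample() followed by log_prob(), summed over the output tokens.
    y_tilde, log_prob = model.sample_output(x)

    # The reward comes from an external, possibly non-differentiable metric,
    # so it enters the loss only as a scalar weight.
    reward = reward_fn(y_tilde)

    loss = -reward * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```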

Alternatively, we can use Minimum Risk Training (MRT) (Smith & Eisner '06, Shen et al. 2016). Here, several outputs are sampled from the model distribution. This stabilises learning, but requires that more outputs are evaluated to get corresponding rewards. Assuming $k$ sampled outputs $\tilde{y}_1, \dots, \tilde{y}_k$, the objective then takes the following form:

$$\hat{R}_{\text{MRT}}(\theta) = - \sum_{i=1}^{k} \frac{p_\theta(\tilde{y}_i \mid x)}{\sum_{j=1}^{k} p_\theta(\tilde{y}_j \mid x)} \, \Delta(\tilde{y}_i),$$

i.e. the rewards are weighted by the model probabilities renormalised over the sampled set.
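Under the same assumptions as the REINFORCE sketch above, a minimal version of this objective for one input, with the probabilities renormalised over the $k$ samples, could look as follows (Shen et al. additionally use a sharpness hyperparameter, which is omitted here):

```python
import torch

def mrt_loss(log_probs, rewards):
    """Minimum Risk Training loss for k sampled outputs of a single input.

    log_probs: tensor of shape (k,) with log p_theta(y_i | x) per sample
    rewards:   tensor of shape (k,) with the corresponding rewards Delta(y_i)
    """
    # Renormalise the model probabilities over the k samples:
    # softmax(log p) = p_i / sum_j p_j.
    q = torch.softmax(log_probs, dim=0)
    # Negative expected reward under the renormalised distribution.
    return -(q * rewards).sum()
```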

Due to sampling, both approaches can suffer from high variance, which can be combatted using control variates (see for example Chapter 9 of Ross 2013).
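One common control variate is a baseline that is subtracted from the reward before it scales the gradient; the running-average baseline below, with an arbitrary decay factor, is only meant to illustrate the idea:

```python
class AverageRewardBaseline:
    """Running average of past rewards, used as a simple control variate."""

    def __init__(self, decay=0.9):
        self.decay = decay
        self.value = 0.0

    def advantage(self, reward):
        # Scale the gradient by (reward - baseline) instead of the raw reward.
        advantage = reward - self.value
        self.value = self.decay * self.value + (1 - self.decay) * reward
        return advantage

# In the REINFORCE sketch above, the loss would become
#   loss = -baseline.advantage(reward) * log_prob
```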

The Problem: The Reward Function cannot be Derived

To minimise $R(\theta)$ with stochastic gradient descent, it is necessary to calculate its gradient $\nabla_\theta R(\theta)$, also called the policy gradient in Reinforcement Learning (RL) terms.

But in practice, the rewards $\Delta(\tilde{y})$ typically either come from an unknown function (e.g. if rewards are collected from human users) or from a function that cannot be differentiated (e.g. in the case of BLEU).

As such, it is not immediately clear how to differentiate the objective, i.e. how to calculate $\nabla_\theta R(\theta)$.

The Solution: Score Function Gradient Estimator

To be able to calculate $\nabla_\theta R(\theta)$, we use two tricks:

1. The Derivative Trick

The derivative of the logarithm is:

$$\nabla_\theta \log p_\theta(\tilde{y} \mid x) = \frac{\nabla_\theta \, p_\theta(\tilde{y} \mid x)}{p_\theta(\tilde{y} \mid x)}.$$

2. The Identity Trick

We can always multiply a term by $\frac{p_\theta(\tilde{y} \mid x)}{p_\theta(\tilde{y} \mid x)} = 1$ without changing its value, e.g.:

$$\nabla_\theta \, p_\theta(\tilde{y} \mid x) = p_\theta(\tilde{y} \mid x) \, \frac{\nabla_\theta \, p_\theta(\tilde{y} \mid x)}{p_\theta(\tilde{y} \mid x)}.$$

Now we can formulate what is known as the score function gradient estimator (Fu '06):

$$\begin{aligned}
\nabla_\theta R(\theta) &= \nabla_\theta \left( - \mathbb{E}_{x \sim p(x)} \, \mathbb{E}_{\tilde{y} \sim p_\theta(\tilde{y} \mid x)} \left[ \Delta(\tilde{y}) \right] \right) && (1) \\
&= - \nabla_\theta \int p(x) \int p_\theta(\tilde{y} \mid x) \, \Delta(\tilde{y}) \; d\tilde{y} \, dx && (2) \\
&= - \int p(x) \int \nabla_\theta \, p_\theta(\tilde{y} \mid x) \, \Delta(\tilde{y}) \; d\tilde{y} \, dx && (3) \\
&= - \int p(x) \int p_\theta(\tilde{y} \mid x) \, \frac{\nabla_\theta \, p_\theta(\tilde{y} \mid x)}{p_\theta(\tilde{y} \mid x)} \, \Delta(\tilde{y}) \; d\tilde{y} \, dx && (4) \\
&= - \int p(x) \int p_\theta(\tilde{y} \mid x) \, \nabla_\theta \log p_\theta(\tilde{y} \mid x) \, \Delta(\tilde{y}) \; d\tilde{y} \, dx && (5) \\
&= - \mathbb{E}_{x \sim p(x)} \, \mathbb{E}_{\tilde{y} \sim p_\theta(\tilde{y} \mid x)} \left[ \nabla_\theta \log p_\theta(\tilde{y} \mid x) \, \Delta(\tilde{y}) \right] && (6)
\end{aligned}$$

Let's investigate what happened in each line:

  • (2): The expectation is expanded into two integrals. $\mathbb{E}_{x \sim p(x)}$ becomes $\int p(x) \, dx$ and $\mathbb{E}_{\tilde{y} \sim p_\theta(\tilde{y} \mid x)}$ turns into $\int p_\theta(\tilde{y} \mid x) \, d\tilde{y}$.
  • (3): Integral and differentiation can be switched, so we move $\nabla_\theta$ in front of $p_\theta(\tilde{y} \mid x)$ because $p_\theta(\tilde{y} \mid x)$ is the only term dependent on $\theta$.
  • (4): We use the identity trick with $\frac{p_\theta(\tilde{y} \mid x)}{p_\theta(\tilde{y} \mid x)} = 1$.
  • (5): We use the derivative trick: $\frac{\nabla_\theta \, p_\theta(\tilde{y} \mid x)}{p_\theta(\tilde{y} \mid x)} = \nabla_\theta \log p_\theta(\tilde{y} \mid x)$.
  • (6): We still have $p_\theta(\tilde{y} \mid x)$ available. With this, we can transform the expression back into an expectation. But in contrast to before, we now have $\nabla_\theta \log p_\theta(\tilde{y} \mid x)$, and this derivative is simply scaled by $\Delta(\tilde{y})$.

We no longer need to know what the function that produces $\Delta(\tilde{y})$ looks like, nor do we need to differentiate it: the reward only scales the derivative $\nabla_\theta \log p_\theta(\tilde{y} \mid x)$, which we can compute from the model alone.
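To see the estimator in action, the following self-contained sketch compares the analytic gradient of the expected reward of a tiny categorical "model" with its score-function Monte-Carlo estimate; note that the reward function is only ever evaluated, never differentiated:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "model": a categorical distribution over 3 outputs with parameters theta.
theta = np.array([0.2, -0.5, 1.0])
rewards = np.array([0.0, 0.5, 1.0])   # black-box rewards, one per possible output

def probs(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

p = probs(theta)

# Analytic gradient of E[reward] (possible here only because the toy reward
# table is fully known): d/dtheta_j of sum_i r_i p_i = p_j * (r_j - E[r]).
analytic_grad = p * (rewards - (p * rewards).sum())

# Score-function estimate: sample outputs, scale grad log p by the observed reward.
n_samples = 200_000
samples = rng.choice(3, size=n_samples, p=p)
grad_log_p = np.eye(3)[samples] - p   # grad_theta log softmax, one row per sample
estimate = (rewards[samples][:, None] * grad_log_p).mean(axis=0)

print(analytic_grad)  # the two vectors should agree to about two decimal places
print(estimate)
```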

For an alternative view on the subject, also see this great blog post.

When can it be applied?

The score function gradient estimator can be applied independently of the underlying model, as long as the model distribution $p_\theta(\tilde{y} \mid x)$ is differentiable with respect to its parameters $\theta$.

E.g. if $p_\theta$ is a log-linear model with feature vectors $\phi(x, \tilde{y})$,

$$p_\theta(\tilde{y} \mid x) = \frac{\exp\left(\theta^\top \phi(x, \tilde{y})\right)}{\sum_{y'} \exp\left(\theta^\top \phi(x, y')\right)},$$

then the derivative would be

$$\nabla_\theta \log p_\theta(\tilde{y} \mid x) = \phi(x, \tilde{y}) - \sum_{y'} p_\theta(y' \mid x) \, \phi(x, y') = \phi(x, \tilde{y}) - \mathbb{E}_{y' \sim p_\theta(y' \mid x)}\left[\phi(x, y')\right].$$
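A small numpy sketch of this log-linear case, with the candidate set and feature vectors as made-up inputs, might look as follows:

```python
import numpy as np

def log_linear_score_gradient(theta, features, sampled_index):
    """grad_theta log p_theta(y~ | x) for a log-linear model.

    features: array of shape (num_candidates, num_features) holding one
              feature vector phi(x, y') per candidate output.
    sampled_index: index of the sampled output y~ among the candidates.
    """
    scores = features @ theta
    p = np.exp(scores - scores.max())
    p /= p.sum()
    # phi(x, y~) minus the expected feature vector under the model.
    return features[sampled_index] - p @ features
```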

In the case of neural networks, backpropagation is applied to compute $\nabla_\theta \log p_\theta(\tilde{y} \mid x)$ (see for example Chapter 3 of Cho 2015).

Lessons Learnt

  • MLE sometimes cannot be applied or leads to inferior performance.
  • Instead, we can leverage rewards from an external metric that evaluates the quality of our model outputs.
  • This metric might be unknown or not differentiable, so (stochastic) gradient descent cannot be applied directly.
  • The score function gradient estimator helps us side-step this problem.

Acknowledgment: Thanks to Julia Kreutzer for her valuable and much needed feedback for improving this post.

Disclaimer: This blogpost reflects solely the opinion of the author, not any of her affiliated organizations and makes no claim or warranties as to completeness, accuracy and up-to-dateness.

Comments, ideas and critical views are very welcome. We appreciate your feedback! If you want to cite this blogpost, use this bibfile.