Differentiable neural computer

A differentiable neural computer being trained to store and recall dense binary numbers. Performance on a reference task during training is shown. Upper left: the input (red) and target (blue), as 5-bit words and a 1-bit interrupt signal. Upper right: the model's output.

In artificial intelligence, a differentiable neural computer (DNC) is a memory-augmented neural network (MANN) architecture, which is typically (but not by definition) recurrent in its implementation. The model was published in 2016 by Alex Graves et al. of DeepMind.[1]

Applications

DNC indirectly takes inspiration from the Von Neumann architecture, making it likely to outperform conventional architectures in tasks that are fundamentally algorithmic and cannot be learned by finding a decision boundary.

So far, DNCs have been demonstrated to handle only relatively simple tasks, which could equally be solved using conventional programming. But DNCs do not need to be programmed for each problem; they can instead be trained. Their attention mechanisms allow the user to feed complex data structures, such as graphs, sequentially and recall them for later use. Furthermore, they can learn aspects of symbolic reasoning and apply them to working memory. The researchers who published the method see promise that DNCs can be trained to perform complex, structured tasks[1][2] and address big-data applications that require some sort of reasoning, such as generating video commentaries or semantic text analysis.[3][4]

A DNC can be trained to navigate a rapid transit system and then apply what it has learned to a different network. A neural network without memory would typically have to learn about each transit system from scratch. On graph traversal and sequence-processing tasks with supervised learning, DNCs performed better than alternatives such as long short-term memory or a neural Turing machine.[5] With a reinforcement learning approach to a block puzzle problem inspired by SHRDLU, a DNC was trained via curriculum learning and learned to make a plan. It performed better than a traditional recurrent neural network.[5]
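
In graph tasks such as these, the network is not given the graph as a data structure; it receives it as a sequence of input vectors, one edge per time step, and must store and recombine the edges in memory. The following minimal sketch shows one plausible encoding of a toy transit network as such a sequence; the station and line vocabularies here are hypothetical, and the exact encoding used in the original experiments differs in detail.

import numpy as np

# Hypothetical vocabularies for a toy transit network.
stations = ["A", "B", "C", "D"]
lines = ["red", "blue"]

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

def encode_edge(src, line, dst):
    # Encode one edge (source station, line, destination station) as a
    # single input vector by concatenating three one-hot codes.
    return np.concatenate([
        one_hot(stations.index(src), len(stations)),
        one_hot(lines.index(line), len(lines)),
        one_hot(stations.index(dst), len(stations)),
    ])

# The graph is presented to the DNC one edge per time step.
edges = [("A", "red", "B"), ("B", "red", "C"), ("C", "blue", "D")]
input_sequence = [encode_edge(*e) for e in edges]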

Architecture

DNC system diagram

DNC networks were introduced as an extension of the Neural Turing Machine (NTM), with the addition of memory attention mechanisms that control where the memory is stored, and temporal attention that records the order of events. This structure allows DNCs to be more robust and abstract than an NTM, and still perform tasks that have longer-term dependencies than some predecessors such as long short-term memory (LSTM). The memory, which is simply a matrix, can be allocated dynamically and accessed indefinitely. The DNC is differentiable end-to-end (each subcomponent of the model is differentiable, therefore so is the whole model). This makes it possible to optimize it efficiently using gradient descent.[3][6][7]
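
To make the idea of differentiable memory access concrete, the following NumPy sketch shows a content-based read: instead of a hard address lookup, the read is a softmax-weighted sum over the rows of the memory matrix, so gradients can flow through it. This is only an illustration of the mechanism formalised below as content-based addressing [math]\displaystyle{ \mathcal{C}(M,\mathbf{k},\beta) }[/math], not the published implementation; the sizes and values are arbitrary.

import numpy as np

# Memory is simply an N x W matrix; a "read" is a soft, differentiable
# weighted sum of its rows rather than a hard address lookup.
N, W = 16, 8
M = np.random.randn(N, W)          # memory matrix
key = np.random.randn(W)           # lookup key emitted by the controller
beta = 5.0                         # key strength (sharpness of the lookup)

# Cosine similarity between the key and every memory row.
similarity = M @ key / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-8)

# Softmax over similarities gives a content-based read weighting,
# and the read vector is the corresponding mixture of memory rows.
weights = np.exp(beta * similarity)
weights /= weights.sum()
read_vector = M.T @ weights

Because every step is a smooth function of the key, the key strength and the memory contents, such a lookup can be trained by gradient descent together with the controller.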

The DNC model is similar to the Von Neumann architecture, and because of the resizability of memory, it is Turing complete.[8]

Traditional DNC

DNC, as originally published[1]

Independent variables
[math]\displaystyle{ \mathbf{x}_t }[/math] Input vector
[math]\displaystyle{ \mathbf{z}_t }[/math] Target vector
Controller
[math]\displaystyle{ \boldsymbol\chi_t = [\mathbf{x}_t; \mathbf{r}_{t-1}^1; \cdots; \mathbf{r}_{t-1}^R] }[/math] Controller input matrix


Deep (layered) LSTM [math]\displaystyle{ \forall\;1\leq l\leq L }[/math]
[math]\displaystyle{ \mathbf{i}_t^l = \sigma(W_{i}^l [\boldsymbol\chi_t; \mathbf{h}_{t-1}^l; \mathbf{h}_t^{l-1}] + \mathbf{b}_i^l) }[/math] Input gate vector
[math]\displaystyle{ \mathbf{o}_t^l = \sigma(W_{o}^l [\boldsymbol\chi_t; \mathbf{h}_{t-1}^l; \mathbf{h}_t^{l-1}] + \mathbf{b}_o^l) }[/math] Output gate vector
[math]\displaystyle{ \mathbf{f}_t^l = \sigma(W_{f}^l [\boldsymbol\chi_t; \mathbf{h}_{t-1}^l; \mathbf{h}_t^{l-1}] + \mathbf{b}_f^l) }[/math] Forget gate vector
[math]\displaystyle{ \mathbf{s}_t^l = \mathbf{f}_t^l \mathbf{s}_{t-1}^l + \mathbf{i}_t^l\tanh(W_{s}^l [\boldsymbol\chi_t; \mathbf{h}_{t-1}^l; \mathbf{h}_t^{l-1}] + \mathbf{b}_s^l) }[/math] Cell state vector,
[math]\displaystyle{ s_0 = 0 }[/math]
[math]\displaystyle{ \mathbf{h}_t^l = \mathbf{o}_t^l \tanh(\mathbf{s}_t^l) }[/math] Hidden state vector,
[math]\displaystyle{ h_0=0; h_t^0=0\;\forall\;t }[/math]


[math]\displaystyle{ \mathbf{y}_t=W_y[\mathbf{h}_t^1;\cdots;\mathbf{h}_t^L]+W_r[\mathbf{r}_t^1;\cdots;\mathbf{r}_t^R] }[/math] DNC output vector
Read & Write heads
[math]\displaystyle{ \xi_t = W_\xi[h_t^1;\cdots;h_t^L] }[/math] Interface parameters
[math]\displaystyle{ =[\mathbf{k}_t^{r,1};\cdots;\mathbf{k}_t^{r,R};\hat{\beta}_t^{r,1};\cdots;\hat{\beta}_t^{r,R};\mathbf{k}_t^w;\hat{\beta}_t^w;\mathbf{\hat{e}}_t;\mathbf{v}_t;\hat{f}_t^1;\cdots;\hat{f}_t^R;\hat{g}_t^a;\hat{g}_t^w;\hat{\boldsymbol\pi}_t^1;\cdots;\hat{\boldsymbol\pi}_t^R] }[/math]


Read heads [math]\displaystyle{ \forall\;1\leq i\leq R }[/math]
[math]\displaystyle{ \mathbf{k}_t^{r,i} }[/math] Read keys
[math]\displaystyle{ \beta_t^{r,i}=\text{oneplus}(\hat{\beta}_t^{r,i}) }[/math] Read strengths
[math]\displaystyle{ f_t^i=\sigma(\hat{f}_t^i) }[/math] Free gates
[math]\displaystyle{ \boldsymbol\pi_t^i=\text{softmax}(\hat{\boldsymbol\pi}_t^i) }[/math] Read modes,
[math]\displaystyle{ \boldsymbol\pi_t^i\in\mathbb{R}^3 }[/math]


Write head
[math]\displaystyle{ \mathbf{k}_t^w }[/math] Write key
[math]\displaystyle{ \beta_t^w=\text{oneplus}(\hat{\beta}_t^w) }[/math] Write strength
[math]\displaystyle{ \mathbf{e}_t=\sigma(\mathbf{\hat{e}}_t) }[/math] Erase vector
[math]\displaystyle{ \mathbf{v}_t }[/math] Write vector
[math]\displaystyle{ g_t^a=\sigma(\hat{g}_t^a) }[/math] Allocation gate
[math]\displaystyle{ g_t^w=\sigma(\hat{g}_t^w) }[/math] Write gate
Memory
[math]\displaystyle{ M_t=M_{t-1}\circ(E-\mathbf{w}_t^w\mathbf{e}_t^\intercal)+\mathbf{w}_t^w\mathbf{v}_t^\intercal }[/math] Memory matrix,
Matrix of ones [math]\displaystyle{ E\in\mathbb{R}^{N\times W} }[/math]
[math]\displaystyle{ \mathbf{u}_t=(\mathbf{u}_{t-1}+\mathbf{w}_{t-1}^w-\mathbf{u}_{t-1}\circ\mathbf{w}_{t-1}^w)\circ\boldsymbol\psi_t }[/math] Usage vector
[math]\displaystyle{ \mathbf{p}_t=\left(1-\sum_i\mathbf{w}_t^w[i]\right)\mathbf{p}_{t-1}+\mathbf{w}_t^w }[/math] Precedence weighting,
[math]\displaystyle{ \mathbf{p}_0=\mathbf{0} }[/math]
[math]\displaystyle{ L_t=(\mathbf{1} - \mathbf{I})\circ\left[(1-\mathbf{w}_t^w[i]-\mathbf{w}_t^w[j])L_{t-1}[i,j]+\mathbf{w}_t^w[i]\mathbf{p}_{t-1}[j]\right] }[/math] Temporal link matrix,
[math]\displaystyle{ L_0=\mathbf{0} }[/math]
[math]\displaystyle{ \mathbf{w}_t^w=g_t^w[g_t^a\mathbf{a}_t+(1-g_t^a)\mathbf{c}_t^w] }[/math] Write weighting
[math]\displaystyle{ \mathbf{w}_t^{r,i}=\boldsymbol\pi_t^i[1]\mathbf{b}_t^i+\boldsymbol\pi_t^i[2]\mathbf{c}_t^{r,i}+\boldsymbol\pi_t^i[3]\mathbf{f}_t^i }[/math] Read weighting
[math]\displaystyle{ \mathbf{r}_t^i=M_t^\intercal\mathbf{w}_t^{r,i} }[/math] Read vectors


[math]\displaystyle{ \mathcal{C}(M,\mathbf{k},\beta)[i]=\frac{\exp\{\mathcal{D}(\mathbf{k},M[i,\cdot])\beta\}}{\sum_j\exp\{\mathcal{D}(\mathbf{k},M[j,\cdot])\beta\}} }[/math] Content-based addressing,
Lookup key [math]\displaystyle{ \mathbf{k} }[/math], key strength [math]\displaystyle{ \beta }[/math]
[math]\displaystyle{ \phi_t }[/math] Indices of [math]\displaystyle{ \mathbf{u}_t }[/math],
sorted in ascending order of usage
[math]\displaystyle{ \mathbf{a}_t[\phi_t[j]]=(1-\mathbf{u}_t[\phi_t[j]])\prod_{i=1}^{j-1}\mathbf{u}_t[\phi_t[i]] }[/math] Allocation weighting
[math]\displaystyle{ \mathbf{c}_t^w=\mathcal{C}(M_{t-1},\mathbf{k}_t^w,\beta_t^w) }[/math] Write content weighting
[math]\displaystyle{ \mathbf{c}_t^{r,i}=\mathcal{C}(M_t,\mathbf{k}_t^{r,i},\beta_t^{r,i}) }[/math] Read content weighting
[math]\displaystyle{ \mathbf{f}_t^i=L_t\mathbf{w}_{t-1}^{r,i} }[/math] Forward weighting
[math]\displaystyle{ \mathbf{b}_t^i=L_t^\intercal\mathbf{w}_{t-1}^{r,i} }[/math] Backward weighting
[math]\displaystyle{ \boldsymbol\psi_t=\prod_{i=1}^R\left(\mathbf{1}-f_t^i\mathbf{w}_{t-1}^{r,i}\right) }[/math] Memory retention vector
Definitions
[math]\displaystyle{ \mathbf{W},\mathbf{b} }[/math] Weight matrix, bias vector
[math]\displaystyle{ \mathbf{0},\mathbf{1},\mathbf{I} }[/math] Zeros matrix, ones matrix, identity matrix
[math]\displaystyle{ \circ }[/math] Element-wise multiplication
[math]\displaystyle{ \mathcal{D}(\mathbf{u},\mathbf{v})=\frac{\mathbf{u}\cdot\mathbf{v}}{\|\mathbf{u}\|\|\mathbf{v}\|} }[/math] Cosine similarity
[math]\displaystyle{ \sigma(x)=1/(1+e^{-x}) }[/math] Sigmoid function
[math]\displaystyle{ \text{oneplus}(x)=1+\log(1+e^x) }[/math] Oneplus function
[math]\displaystyle{ \text{softmax}(\mathbf{x})_j = \frac{e^{x_j}}{\sum_{k=1}^K e^{x_k}} }[/math]    for j = 1, ..., K. Softmax function
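
Taken together, the memory equations above reduce to a short sequence of vector and matrix operations. The following NumPy sketch steps through one write-then-read update for a single read head (R = 1), with variable names mirroring the symbols in the table; it is a simplified illustration of the published equations, not DeepMind's implementation, and it omits the controller and the splitting of the interface vector [math]\displaystyle{ \xi_t }[/math].

import numpy as np

def oneplus(x):
    # Constrains raw strengths to [1, inf); in the full model it is applied
    # when the interface vector is split (that split is omitted here).
    return 1.0 + np.log1p(np.exp(x))

def content_addressing(M, k, beta):
    # C(M, k, beta): softmax over the cosine similarity between key k
    # and every memory row, sharpened by the key strength beta.
    sim = M @ k / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8)
    e = np.exp(beta * sim)
    return e / e.sum()

def allocation_weighting(u):
    # a_t: favour the least-used slots, in ascending order of usage.
    phi = np.argsort(u)
    a = np.zeros_like(u)
    prod = 1.0
    for j in phi:
        a[j] = (1.0 - u[j]) * prod
        prod *= u[j]
    return a

def memory_step(M, u, p, L, w_r_prev, w_w_prev,
                k_w, beta_w, e, v, f, g_a, g_w, k_r, beta_r, pi):
    # Memory retention and usage (the free gate f releases locations
    # that were read on the previous step).
    psi = 1.0 - f * w_r_prev
    u = (u + w_w_prev - u * w_w_prev) * psi
    # Write weighting: interpolate between allocation and content lookup.
    a = allocation_weighting(u)
    c_w = content_addressing(M, k_w, beta_w)
    w_w = g_w * (g_a * a + (1.0 - g_a) * c_w)
    # Erase then add: M_t = M_{t-1} o (E - w e^T) + w v^T.
    M = M * (1.0 - np.outer(w_w, e)) + np.outer(w_w, v)
    # Temporal link matrix (diagonal kept at zero) and precedence weighting.
    L = (1.0 - w_w[:, None] - w_w[None, :]) * L + np.outer(w_w, p)
    np.fill_diagonal(L, 0.0)
    p = (1.0 - w_w.sum()) * p + w_w
    # Read weighting: mix of backward, content and forward read modes.
    b = L.T @ w_r_prev
    fwd = L @ w_r_prev
    c_r = content_addressing(M, k_r, beta_r)
    w_r = pi[0] * b + pi[1] * c_r + pi[2] * fwd
    r = M.T @ w_r        # read vector returned to the controller
    return M, u, p, L, w_w, w_r, r

In the full model these operations are repeated for R read heads, and all keys, strengths, gates and read modes are supplied by the controller's interface vector at every time step.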

Extensions

Refinements include sparse memory addressing, which reduces time and space complexity by a factor of thousands. This can be achieved by using an approximate nearest neighbor algorithm, such as locality-sensitive hashing, or a randomized k-d tree such as the Fast Library for Approximate Nearest Neighbors (FLANN) from UBC.[9] Adding Adaptive Computation Time (ACT) decouples computation time from data time, exploiting the fact that problem length and problem difficulty are not always the same.[10] Training using synthetic gradients performs considerably better than backpropagation through time (BPTT).[11] Robustness can be improved with the use of layer normalization and bypass dropout as regularization.[12]
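
As a rough illustration of the sparse-addressing idea, the sketch below restricts content addressing to the K most similar memory rows, so the cost of a lookup scales with K rather than with the full memory size N. Here the candidates are found exactly with np.argpartition as a stand-in; a scalable version would obtain them from an approximate nearest-neighbour index such as locality-sensitive hashing or FLANN, as in the cited work. The function name and parameters are illustrative, not taken from any library.

import numpy as np

def sparse_content_addressing(M, k, beta, K=8):
    # Cosine similarity between the key and every memory row.
    sim = M @ k / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8)
    # Keep only the K most similar rows (stand-in for an ANN index).
    top = np.argpartition(-sim, K)[:K]
    # Softmax over the K candidates; all other weights stay exactly zero.
    w = np.zeros(M.shape[0])
    e = np.exp(beta * sim[top])
    w[top] = e / e.sum()
    return w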

References

  1. Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward et al. (2016-10-12). "Hybrid computing using a neural network with dynamic external memory". Nature 538 (7626): 471–476. doi:10.1038/nature20101. ISSN 1476-4687. PMID 27732574. Bibcode: 2016Natur.538..471G. http://www.nature.com/articles/nature20101.epdf?author_access_token=ImTXBI8aWbYxYQ51Plys8NRgN0jAjWel9jnR3ZoTv0MggmpDmwljGswxVdeocYSurJ3hxupzWuRNeGvvXnoO8o4jTJcnAyhGuZzXJ1GEaD-Z7E6X_a9R-xqJ9TfJWBqz.
  2. "Differentiable neural computers | DeepMind". https://deepmind.com/blog/differentiable-neural-computers/. 
  3. Burgess, Matt. "DeepMind's AI learned to ride the London Underground using human-like reason and memory". WIRED UK. https://www.wired.co.uk/article/deepmind-ai-tube-london-underground.
  4. Jaeger, Herbert (2016-10-12). "Artificial intelligence: Deep neural reasoning". Nature 538 (7626): 467–468. doi:10.1038/nature19477. ISSN 1476-4687. PMID 27732576. Bibcode: 2016Natur.538..467J.
  5. James, Mike. "DeepMind's Differentiable Neural Network Thinks Deeply". http://www.i-programmer.info/news/105-artificial-intelligence/10174-deepminds-differential-nn-thinks-deeply.html.
  6. "DeepMind AI 'Learns' to Navigate London Tube". PCMAG. https://www.pcmag.com/news/348701/deepmind-ai-learns-to-navigate-london-tube. 
  7. Mannes, John. "DeepMind's differentiable neural computer helps you navigate the subway with its memory". https://techcrunch.com/2016/10/13/__trashed-2/. 
  8. "RNN Symposium 2016: Alex Graves - Differentiable Neural Computer". https://www.youtube.com/watch?v=steioHoiEms?t=721. 
  9. Jack W Rae; Jonathan J Hunt; Harley, Tim; Danihelka, Ivo; Senior, Andrew; Wayne, Greg; Graves, Alex; Timothy P Lillicrap (2016). "Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes". arXiv:1610.09027 [cs.LG].
  10. Graves, Alex (2016). "Adaptive Computation Time for Recurrent Neural Networks". arXiv:1603.08983 [cs.NE].
  11. Jaderberg, Max; Wojciech Marian Czarnecki; Osindero, Simon; Vinyals, Oriol; Graves, Alex; Silver, David; Kavukcuoglu, Koray (2016). "Decoupled Neural Interfaces using Synthetic Gradients". arXiv:1608.05343 [cs.LG].
  12. Franke, Jörg; Niehues, Jan; Waibel, Alex (2018). "Robust and Scalable Differentiable Neural Computer for Question Answering". arXiv:1807.02658 [cs.CL].
