Bidirectional Sequence Generation and its Prototype Transfer

Working at an industrial research lab means both conducting research and then applying it. In the first part of the talk, I will describe the research that led to a method which improves sequence generation for NLP, demonstrated on two chatbot dialogue tasks. In the second part, I will present how the research of the first part has resulted in a framework for prototype transfer, sparked further research, and led to first applications.

Research abstract for the first part: Neural sequence generation is typically performed token-by-token and left-to-right. Whenever a token is generated, only previously produced tokens are taken into consideration. In contrast, for problems such as sequence classification, bidirectional attention, which takes both past and future tokens into consideration, has been shown to perform much better. We propose to make the sequence generation process bidirectional by employing special placeholder tokens. Treated as a node in a fully connected graph, a placeholder token can take both past and future tokens into consideration when generating the actual output token. We verify the effectiveness of our approach experimentally on two conversational tasks, where the proposed bidirectional model outperforms competitive baselines by a large margin.
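
To make the placeholder idea concrete, below is a minimal sketch of one way such bidirectional generation could work: the output positions are pre-filled with placeholder tokens, a Transformer encoder (whose full self-attention lets every position see both past and future tokens, matching the "fully connected graph" view) scores all positions at once, and placeholders are replaced one at a time. This is not the speaker's actual implementation; the confidence-ordered filling strategy and all names and sizes (PLACEHOLDER_ID, VOCAB_SIZE, D_MODEL, generate) are illustrative assumptions.

import torch
import torch.nn as nn

VOCAB_SIZE = 1000       # assumed toy vocabulary size
D_MODEL = 64            # assumed model width
PLACEHOLDER_ID = 0      # assumed id of the special placeholder token

class BidirectionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4,
                                           batch_first=True)
        # An encoder (no causal mask) attends bidirectionally: every
        # position conditions on both past and future tokens.
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))
        return self.out(h)  # token logits for every position

@torch.no_grad()
def generate(model, context_ids, max_new_tokens=5):
    """Append placeholders, then fill them most-confident-first."""
    pad = torch.full((1, max_new_tokens), PLACEHOLDER_ID, dtype=torch.long)
    seq = torch.cat([context_ids, pad], dim=1)
    for _ in range(max_new_tokens):
        probs = model(seq).softmax(-1)
        conf, pred = probs.max(-1)
        mask = seq == PLACEHOLDER_ID            # positions still unfilled
        conf = conf.masked_fill(~mask, -1.0)    # ignore committed tokens
        pos = conf.argmax(dim=1)                # most confident placeholder
        seq[0, pos] = pred[0, pos]              # commit that prediction
    return seq

model = BidirectionalGenerator()
print(generate(model, torch.tensor([[5, 42, 7]])))

Note that each iteration re-runs the model on the partially filled sequence, so later predictions condition on tokens committed both to their left and to their right; the one-at-a-time, most-confident-first order is only one plausible decoding schedule.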