# Origin story of the transformer architecture

The origin story of the [Transformer architecture](https://wiki.g15e.com/pages/Transformer%20architecture.txt). Key points from chapter 9 of [Supremacy: AI, ChatGPT, and the Race that will Change the World](https://wiki.g15e.com/pages/Supremacy%20-%20AI,%20ChatGPT,%20and%20the%20Race%20that%20will%20Change%20the%20World.txt).

---

An [RNN](https://wiki.g15e.com/pages/Recurrent%20neural%20network.txt) without the "recurrent" part:

> The transformer's invention in 2017 was about as impactful to the field of AI as the advent of smartphones was for consumers. …
>
> The culture of complacency partly came from having so many talented scientists on staff. The bar was high, and Google was already using cutting-edge AI techniques, like [RNN](https://wiki.g15e.com/pages/Recurrent%20neural%20network.txt)s, to process billions of words of text every day.
>
> If you were a young AI researcher like Illia Polosukhin, you were sitting next to the people who'd invented these techniques. In early 2017, he was spitballing with two other researchers, Ashish Vaswani and Jakob Uszkoreit.
>
> Researchers like Uszkoreit had been looking into the concept of "attention" in AI, which is when a computer can pick out the most important information in a dataset. Over their salads and sandwiches, the trio wondered if they could use that same technique to translate words more quickly and accurately. …
>
> They were talking about removing the "recurrent" element of [RNN](https://wiki.g15e.com/pages/Recurrent%20neural%20network.txt)s, which was crazy. …

Noam Shazeer joins:

> Soon after he joined the ragtag group of researchers, Shazeer figured out some tricks that helped the new model work with large amounts of data. …

The name "transformer":

> Soon there were eight researchers working on the as-yet-unnamed project, writing code and refining the architecture of what they were calling **the transformer**. The name referred to a system that could transform any input into any output, and while the scientists focused on translating language, their system would eventually do far more. …

[Coreference resolution](https://wiki.g15e.com/pages/Coreference%20resolution.txt):

> … Jones was stunned to find that the system was doing something called [coreference resolution](https://wiki.g15e.com/pages/Coreference%20resolution.txt). … "It's a classic intelligence test [that] AIs failed on," Jones says. But when they fed those same sentences into the transformer, the researchers could see something unusual happening to its "attention head."

Writing [Attention is all you need](https://wiki.g15e.com/pages/Attention%20is%20all%20you%20need.txt) begins:

> About six months after those first conversations over lunch, the researchers wrote up their findings. Polosukhin had already left [Google](https://wiki.g15e.com/pages/Google.txt), but everyone else kept the project going and stayed in the office till midnight to wrap everything up. Vaswani, who was the lead author, slept on a nearby couch overnight.

The title:

> Jones looked up from his desk, nearby. "I'm not very good with titles," he replied. "But how about 'Attention is all you need'?" It was a random thought that had popped into his head, and the other researcher didn't say anything in agreement. In fact, he got up and walked away, Jones recalls.
>
> But later, the title "Attention Is All You Need" landed on the front page of their paper, a perfect summary of what they'd discovered.
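As an aside, the "attention" of the title boils down to a few lines of linear algebra. Below is a minimal sketch of scaled dot-product attention, assuming NumPy; the function names and toy data are illustrative, not the team's actual code.

```python
# Minimal sketch of scaled dot-product attention, the mechanism the
# paper's title refers to. Illustrative only; assumes NumPy.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Every query attends to every key in one matrix multiply, so
    nothing is processed sequentially -- which is what made dropping
    the "recurrent" element of RNNs plausible."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row picks out the most important positions
    return weights @ V, weights

# Toy usage: 4 token vectors of dimension 8, attending to each other.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, weights = attention(x, x, x)  # self-attention
print(weights.round(2))  # rows sum to 1; large entries mark attended tokens
```

Inspecting `weights` row by row is, loosely, what the researchers were doing when they watched the "attention head" on coreference test sentences: a pronoun's row puts high weight on the noun it refers to.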
Sluggish Google:

> (Transformers) had the potential to supercharge [AI](https://wiki.g15e.com/pages/Artificial%20intelligence.txt) systems, but [Google](https://wiki.g15e.com/pages/Google.txt) was slow off the mark to do anything about them. It took several years, for instance, for Google to plug transformers into services like BERT, an [LLM](https://wiki.g15e.com/pages/Large%20language%20model.txt) that it developed to make its search engine better at processing the nuance of human language. …
>
> Google's cautious approach was largely a product of bloat. The downside to being one of the largest companies of all time, with a monopolistic grip on the search market, is that everything moves at a snail's pace. You're constantly afraid of public backlash or regulatory scrutiny. Your prime concern is maintaining growth and dominance.

All of the co-authors have left Google:

> Frustrated, Shazeer left Google in 2021 to pursue his research on [LLM](https://wiki.g15e.com/pages/Large%20language%20model.txt)s independently, cofounding a chatbot company called [character.ai](https://wiki.g15e.com/pages/character.ai.txt). … Of the eight researchers who invented the transformer, all have now left Google.

A "biblical moment":

> Soon, Google was going to experience what has been described as a "biblical moment." As Google continued printing money from its advertising business, OpenAI was taking what looked like a monumental step toward artificial general intelligence, and it wasn't keeping anything under wraps.