AIニュースで英語学習 生成AI、トークン工場時代へ──Preferred Networks岡野原氏が国産AIの必要性を訴え

📰 ENGLISH NEWS

生成AI、トークン工場時代へ──Preferred Networks岡野原氏が国産AIの必要性を訴え

#英語学習 #英語ニュース #リスニング #シャドーイング
📺はじめての方へ — このシリーズについて

YouTubeチャンネル「つくもち英語部」の連動コンテンツ置き場。

AI・データサイエンス分野の最新ニュースを題材に、エンジニア特有の英語表現を日英対訳で学べます。

動画スクリプトに加え、専門用語の解説や関連するG検定項目への内部リンクも併設。

「英語で技術情報をキャッチアップする力」を、技術知識と一緒に鍛えるのが狙いです。

理系のキャリアをグローバルに広げたい人のための、実務直結型の英語学習リソースです。

📌 ニュース概要

Preferred Networksの岡野原大輔氏は講演で、生成AIが急速な進化を遂げる中でAI自身が学習を改善する段階に入り、巨大な「トークン工場」としてのデータセンター投資が加速している現状を解説しました。日本の状況については、米中に比して開発が遅れデジタル赤字や供給リスクに直面していると指摘し、機密保持や文化保護の観点から純国産AIと独自半導体の開発を継続する重要性を強調しました。AI時代における人間の役割は、適切な目標設定やタスクの分解、プロセスの監査といった「環境設計」にあると提言しています。

📖 英文と日本語訳(一文ずつ)

1. ENGLISH

On April 24, 2026, Daisuke Okanohara, co-founder and president of Preferred Networks, delivered a lecture titled "The Evolution and Future of Generative AI: Domestic AI and Physical AI" at the Policy Research Institute of the Ministry of Finance.

和訳

Preferred Networks共同創業者で代表取締役社長の岡野原大輔氏は、2026年4月24日に財務省財務総合政策研究所で「生成AIの進化と未来-国産AIとフィジカルAI」と題する講演を行いました

2. ENGLISH

Mr. Okanohara first pointed out that generative AI has evolved phenomenally over the past decade, and that various generative tasks, such as language, images, and robot control, have come to be handled in a unified manner.

和訳

岡野原氏はまず、生成AIがこの10年間で驚異的に進化し、言語、画像、ロボット制御といった様々な生成タスクが統一的に扱われるようになったと指摘しました

3. ENGLISH

Three factors were cited as the background: a 4.6-fold annual increase in computing power used for AI development, the exponential growth of training data which has now reached the scale of tens of trillions of characters, and the publication of hundreds of thousands of research papers annually.

和訳

その背景として、AI開発に使われる計算力が毎年4.6倍に増加していること、学習データが指数的に増加し現在は数十兆文字規模に達していること、年間数十万本の研究論文が発表されていることの3点を挙げました

4. ENGLISH

AI performance is improving steadily and predictably, reaching passing levels for the University of Tokyo entrance exams and the National Medical Practitioners Qualifying Examination. OpenAI’s o3-pro also achieved a 21% accuracy rate on “Humanity’s Last Exam,” a collection of the most difficult problems gathered from 1,000 researchers.

和訳

AIの性能は予測可能な形で着実に向上しており、東大入試や医師国家試験などの合格水準に到達しているほか、研究者1000人から集めた最難関問題群「Humanity's Last Exam」でもOpenAIのo3-proが21%の正解率を達成しています

5. ENGLISH

The cost of providing equivalent intelligence continues to fall by a factor of 100 to 1,000 annually, while long-term task processing capabilities are doubling every four months. It is reported that Claude Opus 4.6 is achieving a 50% success rate on tasks that take a human 12 hours to complete.

和訳

同じ賢さを提供するコストは1年あたり100分の1から1000分の1のペースで下がり続けており、長期タスク処理能力は4ヶ月ごとに倍増し、Claude Opus 4.6は人間が12時間かかるタスクを50%の確率で成功させているといいます

6. ENGLISH

Furthermore, autonomous research such as "AutoResearch" has emerged, in which LLMs improve their own learning while humans sleep.

和訳

さらに、人間が寝ている間にLLM自身がLLMの学習を改善する「AutoResearch」のような自律的研究も登場しています

7. ENGLISH

Regarding the direction of AI development, it was explained that while improvements driven by the scaling laws of pre-training—previously the mainstream approach—are reaching a plateau, rapid progress is being made in enhancing training data quality through AI itself and improving reasoning capabilities via reinforcement learning with verifiable rewards.

和訳

AI開発の方向性については、これまで主流だった事前学習のスケーリング則による改善が頭打ちになりつつある一方で、AI自身による学習データの品質改善や、検証可能な報酬を用いた強化学習による推論能力の向上が急速に進んでいると説明しました

8. ENGLISH

It was noted that gigawatt-class data centers and development teams of thousands are being utilized for training. Capital investment in U.S. AI data centers has surpassed 1% of GDP, and some predict it could expand to 6%, mirroring the railway investment boom of the 1880s.

和訳

学習にはギガワット級のデータセンターと数千人規模の開発チームが投入されており、米国のAIデータセンターへの設備投資はGDP比1%を超え、1880年代の鉄道投資と同様にGDP比6%まで拡大するとの予測もあると紹介しました

9. ENGLISH

On the demand side, software development support accounts for more than half of current LLM demand, with the market size estimated to range from several trillion to tens of trillions of yen.

和訳

需要面では、現在のLLM需要の半分以上をソフトウェア開発支援が占めており、市場規模は数兆円から数十兆円に上ると試算しています

10. ENGLISH

OpenAI’s weekly active users have surpassed 900 million, with annualized revenue reaching $24 billion as of February 2026. Meanwhile, Anthropic has continued to grow at an annual rate of more than tenfold over the past three years, with its annualized revenue reaching $14 billion.

和訳

OpenAIの週次利用ユーザーは9億人を超え、2026年2月時点の年換算売上は240億ドル、Anthropicは過去3年間年率10倍超の成長を続け年換算売上は140億ドルに達しました

11. ENGLISH

An estimated 140 trillion tokens are processed daily in China, with ByteDance alone accounting for 30 trillion tokens and Google for 43 trillion.

和訳

中国では1日あたり140兆トークンが処理され、ByteDance1社で30兆トークン、Googleで43兆トークンに上ると推定されます

12. ENGLISH

Mr. Okanohara positioned AI data centers as "token factories," presenting an estimate that one gigawatt of power could support a revenue potential of $50 billion (7.5 trillion yen), 23 trillion tokens per day, and the equivalent of one million GPUs.

和訳

岡野原氏はAIデータセンターを「トークン工場」と位置づけ、1ギガワットの電力で500億ドル(7.5兆円)分の売上ポテンシャル、23兆トークン/日、100万GPU相当に対応するとの試算を示しました

13. ENGLISH

In its GTC 2026 keynote, NVIDIA projected the AI infrastructure market to reach $1 trillion, declaring a shift from the era of training to the era of inference.

和訳

NVIDIAはGTC 2026の講演でAIインフラ市場が1兆ドル規模になると予測し、学習から推論の時代への転換を宣言しています

14. ENGLISH

Regarding the situation in Japan, the analysis found that China lags several months behind the United States, while Japan and other countries are approximately one year behind.

和訳

日本の状況については、米国に対し中国が数ヶ月遅れ、日本などその他の国は約1年遅れていると分析しました

15. ENGLISH

An analysis of the digital deficit structure estimated from OpenAI’s cost structure pointed out that AI semiconductors are the largest factor for capital outflow, with only 20% to 30% of the total currently remaining within the country.

和訳

OpenAIの費用構造から推計したデジタル赤字の構図では、AI半導体が最も大きな海外流出要因であり、現状は全体の2~3割しか国内に残らないと指摘しました

16. ENGLISH

Risks associated with the use of closed models were cited, including Anthropic’s large-scale analysis of user requests, a proposal to mandate investment in the U.S. for countries purchasing AI semiconductors, and the temporary designation of Claude as a "supply chain risk" by the U.S. Department of Defense due to a conflict with Anthropic.

和訳

クローズドモデル利用のリスクとして、Anthropicによるユーザーリクエストの大規模解析、AI半導体購入国への対米投資義務付け案の浮上、米国防総省とAnthropicの対立によるClaudeの一時的な「サプライチェーンリスク」指定などを挙げました

17. ENGLISH

Based on these considerations, Okanohara emphasized that the continued development of domestic generative AI is of critical importance for avoiding dependence on foreign suppliers, accumulating development know-how, ensuring secure closed-network operations for confidential information, eliminating bias, reducing the digital trade deficit, and protecting Japanese culture and social norms.

和訳

これらを踏まえ岡野原氏は、AI供給の他国依存回避、開発ノウハウの蓄積、機密情報を扱う閉域運用、バイアス排除、デジタル赤字解消、日本文化や社会規範の保護といった観点から、国産生成AIの開発継続が極めて重要だと強調しました

18. ENGLISH

Preferred Networks highlighted its initiatives, including PLaMo, a domestically developed foundation model with world-class Japanese language performance; Matlantis, a materials discovery simulator used by over 150 organizations; Kachaka Pro, which holds the top share (47.9%) of the domestic autonomous mobile robot (AMR) market; MN-Core L1000, a next-generation AI semiconductor featuring 3D-stacked DRAM scheduled for release in 2027; and a cybersecurity business in collaboration with GMO.

和訳

Preferred Networksの取り組みとしては、世界最高クラスの日本語性能を持つ純国産基盤モデル「PLaMo」、150超の組織で利用される材料開発シミュレーター「Matlantis」、国内AMR市場シェア1位(47.9%)の「カチャカプロ」、3D積層DRAMを採用する次世代AI半導体「MN-Core L1000」(2027年提供予定)、GMOと連携したサイバーセキュリティ事業などを紹介しました

19. ENGLISH

Finally, addressing human work styles in the AI era, the presentation highlighted the importance of defining goals and constraints, managing data access permissions, decomposing tasks into appropriate levels of granularity, and continuous auditing and intervention, concluding that proper environment design (harness) and task management will be the key.

和訳

最後にAI時代の人間の働き方として、目標と制約の決定、データアクセス権限の管理、適切な粒度へのタスク分解、逐次監査・介入の重要性を挙げ、適切な環境設計(ハーネス)とタスク管理が鍵になると結びました

🎧 通し読み(全文)

リスニング・シャドーイング用の全文です。

On April 24, 2026, Daisuke Okanohara, co-founder and president of Preferred Networks, delivered a lecture titled "The Evolution and Future of Generative AI: Domestic AI and Physical AI" at the Policy Research Institute of the Ministry of Finance. Mr. Okanohara first pointed out that generative AI has evolved phenomenally over the past decade, and that various generative tasks, such as language, images, and robot control, have come to be handled in a unified manner. Three factors were cited as the background: a 4.6-fold annual increase in computing power used for AI development, the exponential growth of training data which has now reached the scale of tens of trillions of characters, and the publication of hundreds of thousands of research papers annually. AI performance is improving steadily and predictably, reaching passing levels for the University of Tokyo entrance exams and the National Medical Practitioners Qualifying Examination. OpenAI’s o3-pro also achieved a 21% accuracy rate on “Humanity’s Last Exam,” a collection of the most difficult problems gathered from 1,000 researchers. The cost of providing equivalent intelligence continues to fall by a factor of 100 to 1,000 annually, while long-term task processing capabilities are doubling every four months. It is reported that Claude Opus 4.6 is achieving a 50% success rate on tasks that take a human 12 hours to complete. Furthermore, autonomous research such as "AutoResearch" has emerged, in which LLMs improve their own learning while humans sleep. Regarding the direction of AI development, it was explained that while improvements driven by the scaling laws of pre-training—previously the mainstream approach—are reaching a plateau, rapid progress is being made in enhancing training data quality through AI itself and improving reasoning capabilities via reinforcement learning with verifiable rewards. 
It was noted that gigawatt-class data centers and development teams of thousands are being utilized for training. Capital investment in U.S. AI data centers has surpassed 1% of GDP, and some predict it could expand to 6%, mirroring the railway investment boom of the 1880s. On the demand side, software development support accounts for more than half of current LLM demand, with the market size estimated to range from several trillion to tens of trillions of yen. OpenAI’s weekly active users have surpassed 900 million, with annualized revenue reaching $24 billion as of February 2026. Meanwhile, Anthropic has continued to grow at an annual rate of more than tenfold over the past three years, with its annualized revenue reaching $14 billion. An estimated 140 trillion tokens are processed daily in China, with ByteDance alone accounting for 30 trillion tokens and Google for 43 trillion. Mr. Okanohara positioned AI data centers as "token factories," presenting an estimate that one gigawatt of power could support a revenue potential of $50 billion (7.5 trillion yen), 23 trillion tokens per day, and the equivalent of one million GPUs. In its GTC 2026 keynote, NVIDIA projected the AI infrastructure market to reach $1 trillion, declaring a shift from the era of training to the era of inference. Regarding the situation in Japan, the analysis found that China lags several months behind the United States, while Japan and other countries are approximately one year behind. An analysis of the digital deficit structure estimated from OpenAI’s cost structure pointed out that AI semiconductors are the largest factor for capital outflow, with only 20% to 30% of the total currently remaining within the country. Risks associated with the use of closed models were cited, including Anthropic’s large-scale analysis of user requests, a proposal to mandate investment in the U.S. for countries purchasing AI semiconductors, and the temporary designation of Claude as a "supply chain risk" by the U.S. Department of Defense due to a conflict with Anthropic.
Based on these considerations, Okanohara emphasized that the continued development of domestic generative AI is of critical importance for avoiding dependence on foreign suppliers, accumulating development know-how, ensuring secure closed-network operations for confidential information, eliminating bias, reducing the digital trade deficit, and protecting Japanese culture and social norms. Preferred Networks highlighted its initiatives, including PLaMo, a domestically developed foundation model with world-class Japanese language performance; Matlantis, a materials discovery simulator used by over 150 organizations; Kachaka Pro, which holds the top share (47.9%) of the domestic autonomous mobile robot (AMR) market; MN-Core L1000, a next-generation AI semiconductor featuring 3D-stacked DRAM scheduled for release in 2027; and a cybersecurity business in collaboration with GMO. Finally, addressing human work styles in the AI era, the presentation highlighted the importance of defining goals and constraints, managing data access permissions, decomposing tasks into appropriate levels of granularity, and continuous auditing and intervention, concluding that proper environment design (harness) and task management will be the key.

📝 学習のヒント

  1. まず英文を読む — 知らない単語にあたりをつけてから音声へ。
  2. 一文ずつ確認 — 日本語訳と照合し、構文を理解する。
  3. 通し読み Normal で耳を作る — 内容を追いながらリピート。
  4. Fast でシャドーイング — 口を慣らし、リスニング速度を上げる。
  5. 翌日に復習 — 1日空けて再聴すると長期記憶に定着しやすい。
© つくもち英語部