Archives【アーカイブ】

2025 Jan.25
News Release

Presentation at the Event for Female Middle and High School Students: “What Should Middle and High School Students Do for Their Future in the AI Era?” / 女子中高生対象イベント「AI時代、中高生は将来に向けて何をすればいいのか?」登壇

On Saturday, January 25, 2025, at the event for female middle and high school students titled “What Should Middle and High School Students Do for Their Future in the AI Era?”, Assistant Professor Reina Akama gave a presentation, “What Do Researchers in AI, Mathematics, and Informatics Study?”, and participated in a panel discussion, “What Should Middle and High School Students Do for Their Future in the AI Era?”.

2025年1月25日(土) に行われた女子中高生対象イベント「AI時代、中高生は将来に向けて何をすればいいのか?」にて、赤間怜奈助教が研究者講演「AIや数理・情報の研究者は何を研究しているの?」および、パネルディスカッション「AI時代、中高校生は将来に向けて何をすべきか」に登壇しました。

2025 Jan.23
News Release

Acceptance to ICLR 2025 / ICLR 2025 採択

The following papers have been accepted to the Thirteenth International Conference on Learning Representations (ICLR 2025).

ICLR 2025 に以下の論文が採択されました。

  • Makoto Shing, Kou Misaki, Han Bao, Sho Yokoi, Takuya Akiba.
    “TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models” (Spotlight)
  • Hiroyuki Deguchi, Go Kamoda, Yusuke Matsushita, Chihiro Taguchi, Kohei Suenaga, Masaki Waga, Sho Yokoi.
    “SoftMatcha: A Soft and Fast Pattern Matcher for Billion-Scale Corpus Searches”
  • Taishi Nakamura, Takuya Akiba, Kazuki Fujii, Yusuke Oda, Rio Yokota, Jun Suzuki.
    “Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization”
  • Yui Oka, Taku Hasegawa, Kyosuke Nishida, Kuniko Saito.
    “Wavelet-based Positional Representation for Long Context”
  • Itsumi Saito, Haruto Yoshida, Keisuke Sakaguchi.
    “Sketch2Diagram: Generating Vector Diagrams from Hand-Drawn Sketches”
2025 Jan.23
News Release

Acceptance to NAACL 2025 / NAACL 2025 採択

The following papers have been accepted to the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL 2025).

NAACL 2025 に以下の論文が採択されました。

Main Conference

  • Jaehyeok Lee, Keisuke Sakaguchi, JinYeong Bak.
    “Self-Training Meets Consistency: Improving LLMs’ Reasoning With Consistency-Driven Rationale Evaluation”
  • Dominic Sobhani, Ruiqi Zhong, Edison Marrese-Taylor, Keisuke Sakaguchi, Yutaka Matsuo.
    “Language Models can Categorize System Inputs for Performance Analysis”
  • Kazuki Yano, Takumi Ito, Jun Suzuki.
    “STEP: Staged Parameter-Efficient Pre-training for Large Language Models”
  • Ahmed Oumar El-Shangiti, Tatsuya Hiraoka, Hilal AlQuabeh, Benjamin Heinzerling, Kentaro Inui.
    “The Geometry of Numerical Reasoning: Language Models Compare Numeric Properties in Linear Subspaces”
  • Tatsuya Hiraoka, Kentaro Inui.
    “Repetition Neurons: How Do Language Models Produce Repetitions?”

Findings

  • Go Kamoda, Benjamin Heinzerling, Tatsuro Inaba, Keito Kudo, Keisuke Sakaguchi, Kentaro Inui.
    “Weight-based Analysis of Detokenization in Language Models: Understanding the First Stage of Inference Without Inference”
2025 Jan.21
News Release

Acceptance to WWW 2025 / WWW 2025 採択

The following paper has been accepted to the International World Wide Web Conference 2025 (WWW2025 | The Web Conf 2025).

WWW2025 に以下の論文が採択されました。

  • Dongyuan Li, Satoshi Kosugi, Ying Zhang, Manabu Okumura, Feng Xia, Renhe Jiang.
    “Revisiting Dynamic Graph Clustering via Matrix Factorization”
2024 Dec.17
News Release

Alumni Interview “MY DECISION” Updated / Alumni インタビュー「MY DECISION」更新

An interview with Masatoshi Suzuki (Ph.D. completed in 2021) is now available on “MY DECISION.”

「MY DECISION」に、鈴木正敏さん(2021年博士後期課程修了)のインタビューを公開しました。

2024 Dec.16
News Release

Postdoctoral researcher Masaya Taniguchi appeared on the TOKYO FM official podcast “Engineer’s Paradise vim-jp Radio” / 東京FM公式ポッドキャスト「エンジニアの楽園 vim-jpラジオ」に谷口雅弥研究員が出演いたしました

Postdoctoral researcher Masaya Taniguchi appeared on the TOKYO FM official podcast “Engineer’s Paradise vim-jp Radio,” where he talked about his research and his OSS activities (LISP and Vim).

東京FM公式ポッドキャスト「エンジニアの楽園 vim-jpラジオ」に谷口雅弥(研究員)が出演し、研究活動とOSS活動 (LISPとVim) について話しました。

The episode is available here: https://audee.jp/voice/show/95400

2024 Dec.16
News Release

Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024) / NeurIPS 2024 発表

The following work was presented at the Thirty-Eighth Annual Conference on Neural Information Processing Systems, held in Vancouver from December 10th to 15th.

12月10日から15日にかけてバンクーバーで開催された NeurIPS 2024 にて、下記の発表を行いました。

  • Sho Yokoi, Han Bao, Hiroto Kurita and Hidetoshi Shimodaira.
    “Zipfian Whitening”
2024 Dec.02
News Release

Acceptance to COLING 2025 / COLING 2025 採択

The following papers have been accepted to the 31st International Conference on Computational Linguistics (COLING 2025).

COLING 2025 に下記の論文が採択されました。

  • Yunmeng Li, Jun Suzuki, Makoto Morishita, Kaori Abe and Kentaro Inui.
    “MQM-Chat: Multidimensional Quality Metrics for Chat Translation”
  • Go Kamoda, Akari Asai, Ana Brassard and Keisuke Sakaguchi.
    “Quantifying the Influence of Evaluation Aspects on Long-Form Response Assessment”
  • Daiki Shiono, Ana Brassard, Yukiko Ishizuki and Jun Suzuki.
    “Evaluating Model Alignment with Human Perception: A Study on Shitsukan in LLMs and LVLMs”
  • Asahi Hentona, Jun Baba, Shiki Sato and Reina Akama.
    “User Willingness-aware Sales Talk Dataset”
2024 Dec.02
News Release

Acceptance to COLING 2025 Industry Track / COLING 2025 Industry Track 採択

The following paper has been accepted to the Industry Track at the 31st International Conference on Computational Linguistics (COLING 2025).

COLING 2025 Industry Track に下記の論文が採択されました。

  • Toshiki Kuramoto and Jun Suzuki.
    “Predicting Fine-tuned Performance on Larger Datasets Before Creating Them”
2024 Nov.28
News Release

Lecture at ASCONE2024 / 日本神経回路学会 オータムスクール(ASCONE2024)『脳・理解・計算』にて講義を行いました

Assistant Professor Sho Yokoi gave a lecture entitled “The understanding of understanding: a perspective from representation learning of natural language” at the Autumn School for Computational Neuroscience (ASCONE2024), held from November 25 to 28.

横井祥助教が11/25から11/28に開催された日本神経回路学会オータムスクール(ASCONE2024)『脳・理解・計算』にて「言語の表現学習が問う『理解の理解』」という題で講義を行いました。

2024 Nov.22
News Release

1st and 2nd place in WMT2024 / WMT2024 で1位と2位を獲得

The joint team from Tohoku University, RIKEN, NAIST, Future Corporation, and Langsmith Inc. achieved 1st place in Japanese-Chinese translation and 2nd place in English-Japanese translation in the WMT2024 Shared Task: General Machine Translation (constrained track). Congratulations!

WMT2024 Shared Task: General Machine Translation (constrained track) にて、東北大学、理研、フューチャー、Langsmith の合同チームが日中翻訳で1位、英日翻訳で2位を獲得しました。おめでとうございます!

  • Keito Kudo*, Hiroyuki Deguchi*, Makoto Morishita*, Ryo Fujii*, Takumi Ito*, Shintaro Ozaki*, Koki Natsumi, Kai Sato, Kazuki Yano, Ryosuke Takahashi, Subaru Kimura, Tomomasa Hara, Yusuke Sakai and Jun Suzuki (*equal contributions).
    “Document-level Translation with LLM Reranking: Team-J at WMT 2024 General Translation Task”
2024 Nov.22
News Release

The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)

The following papers were presented at the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024), held in Miami from November 12th to 16th.

11月12日から16日にかけてマイアミで開催された EMNLP 2024 にて、下記の発表を行いました。

Main Conference

  • Qin Dai, Benjamin Heinzerling and Kentaro Inui.
    “Low-rank Subspace for Binding in Large Language Models”
  • Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Keisuke Sakaguchi and Kentaro Inui.
    “First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning”
  • Irfan Robbani, Paul Reisert, Surawat Pothong, Naoya Inoue, Camélia Guerraoui, Wenzhi Wang, Shoichi Naito, Jungmin Choi and Kentaro Inui.
    “Flee the Flaw: Annotating the Underlying Logic of Fallacious Arguments Through Templates and Slot-filling”

Findings

  • Shoichi Naito*, Wenzhi Wang*, Paul Reisert, Naoya Inoue, Camélia Guerraoui, Kenshi Yamaguchi, Jungmin Choi, Irfan Robbani, Surawat Pothong and Kentaro Inui (*equal contribution).
    “Designing Logic Pattern Templates for Counter-Argument Logical Structure Analysis”
  • Dongyuan Li, Ying Zhang, Zhen Wang, Shiyin Tan, Satoshi Kosugi and Manabu Okumura.
    “Active Learning for Abstractive Text Summarization via LLM-Determined Curriculum and Certainty Gain Maximization”
2024 Nov.21
News Release

Talk at NLP Colloquium / NLPコロキウムにてトークを行いました

Assistant Professor Sho Yokoi gave a talk titled “Zipfian Whitening: How the Type–Token Distinction Yields Better Embedding Spaces and Loss Functions” at the NLP Colloquium held on November 20th.

11月20日に開催されたNLPコロキウムにて、横井祥助教が『Zipf白色化:タイプとトークンの区別がもたらす良質な埋め込み空間と損失関数』のタイトルでトークを行いました。

2024 Nov.07
News Release

Best Presentation Award at the 27th Workshop on Information-Based Induction Sciences (IBIS2024) / 第27回情報論的学習理論ワークショップ (IBIS2024) 最優秀プレゼンテーション賞受賞

The following presentation received the Best Presentation Award at the 27th Workshop on Information-Based Induction Sciences (IBIS2024), held from November 4 to 7, 2024. Congratulations!

11月4日から11月7日にかけて開催された第27回情報論的学習理論ワークショップ (IBIS2024) にて、下記の発表が最優秀プレゼンテーション賞を受賞しました。おめでとうございます!

  • 横井祥 (東北大, 理研), 包含 (京都大), 栗田宙人 (東北大), 下平英寿 (京都大, 理研)
    “Zipf 白色化”
2024 Nov.07
News Release

The 27th Workshop on Information-Based Induction Sciences (IBIS2024) / 第27回情報論的学習理論ワークショップ (IBIS2024)

The following works were presented at the 27th Workshop on Information-Based Induction Sciences (IBIS2024), held from November 4 to 7.

11月4日から11月7日にかけて開催された第27回情報論的学習理論ワークショップ (IBIS2024) にて、下記の発表をおこないました。

  • 鴨田豪*, 伊藤郁海*, 熊谷雄介, 横井祥 (*equal contributions)
    “文脈内学習設定における言語モデルの出力較正”
  • 横井 祥, 包 含, 栗田 宙人, 下平 英寿
    “Zipf 白色化”
  • 都地 悠馬, 高橋 惇, 横井 祥, Vwani Roychowdhury, 宮原 英之
    “長距離相互作用する文脈依存言語における相転移現象—言語モデルの創発現象を統計力学の視点で理解する—”

In addition, Assistant Professor Sho Yokoi served as a facilitator in the panel discussion “Machine Learning that Society Demands.”

また、横井祥助教がパネルディスカッション「社会が求める機械学習」にファシリテーターとして登壇しました。

2024 Oct.31
News Release

Lecture at Large Language Models 2024 Course / 講座 大規模言語モデル 2024 にて講義を行いました

Ph.D. student Goro Kobayashi gave a lecture in the 10th session, titled “Analysis and Theory of LLMs,” as part of the Large Language Models 2024 course, hosted by the Matsuo-Iwasawa Laboratory at the University of Tokyo on October 30.

小林悟郎(博士3年)が10月30日に東京大学松尾・岩澤研究室が主催する講座 大規模言語モデル 2024 の第10回「LLMの分析と理論」にて講義を行いました。

2024 Oct.21
News Release

The 30th Anniversary Symposium of the Association for Natural Language Processing / 言語処理学会30周年記念シンポジウム

Professor Kentaro Inui participated in the panel discussion “Challenges and Future of Natural Language Processing” at the 30th Anniversary Symposium of the Association for Natural Language Processing. He was also appointed as a Fellow of the association. Congratulations!

乾健太郎教授が言語処理学会30周年記念シンポジウムにて開催されたパネルディスカッション「言語処理の課題と未来」に登壇しました。また、同学会より「言語処理学会フェロー」に認定されました。おめでとうございます!

2024 Oct.16
News Release

Graduation Ceremony / 学位記授与式

Congratulations on your graduation. We wish you every success in the future!

ご修了おめでとうございます。これからの益々のご活躍をお祈り申し上げます。

2024 Oct.11
News Release

Press release / プレスリリース公開 “フューチャー、国内生成AIの開発力強化プロジェクト「GENIAC」公募に採択”

A press release has been published on Future Corporation’s selection in the open call for GENIAC, a program for strengthening domestic generative AI development capabilities. The selected project is being developed jointly with Professor Jun Suzuki’s team.

フューチャー株式会社が GENIAC 公募に採択されたことに関するプレスリリースが公開されました。本採択事業は鈴木潤教授が共同チームで開発を進めるものです。

2024 Oct.11
News Release

Acceptance to Machine Learning and Compression Workshop @ NeurIPS 2024 / Machine Learning and Compression Workshop @ NeurIPS 2024 採択

The following paper has been accepted to the Machine Learning and Compression Workshop @ NeurIPS 2024.

Machine Learning and Compression Workshop @ NeurIPS 2024 に下記の論文が採択されました。

  • Makoto Shing, Kou Misaki, Han Bao, Sho Yokoi, Takuya Akiba.
    “TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models”