
Respondent R0037

id: R0037
ID: 37
Start time: 2026-02-10 13:42:35
Completion time: 2026-02-10 13:49:37
Email: xie.chun.gu@u.tsukuba.ac.jp
Name: 謝 淳
Language: Japanese
Last modified:
Full name (氏名): 謝淳
Affiliation (faculty/organization; students should select their supervisor's affiliation): Center for Computational Sciences
Research field: Computer Vision
Respondent category (please select from the options below): Faculty Member / Researcher
Position: Assistant Professor (助教)
(Graduate students only) Name of academic supervisor:
(Graduate students only) Have you confirmed with your academic supervisor that you are entering research information and/or research data information in this questionnaire?
2-1 Experience with and Perceptions of AI Utilization in Research (select all that apply): I conduct research on AI itself or on the advancement of AI technologies.
3-1 Please describe the research theme(s) you would like to promote (or have promoted) by utilizing AI (about 40 characters per theme).
3-2 Please provide an overview of the academic challenge(s) you would like to address (or have addressed) by utilizing AI (written so that experts outside the field can understand; about 100-300 characters per theme).
3-3 If possible, please provide specific details on the following points: 1. In which part(s) of your research theme would AI be particularly effective, or bring improvement? 2. What impact could utilizing AI have on your research field?
3-4 At this point, would you like to apply for the AI for Science Challenge-type program?
3-5 If you have any challenges or concerns regarding the introduction or use of AI in your own research activities, please let us know; your answers will inform the design of our support. (Multiple answers allowed)
3-6 Have the data for the above research theme already been collected?
4-1 Please describe, to the extent possible, the sources and collection methods of the research data that you would like to use (or have used) with AI.
4-2 Please describe, to the extent possible, the type(s) of data, with reference to the following: observational / experimental / measurement data; image / audio / video / text / time-series; numerical simulation results; sensor data; behavioral data / social survey or questionnaire data (cross-sectional, or longitudinal with multiple points per person); bibliographic data / digital archives; other.
4-3 In addition to the above, please describe, to the extent possible, the sources and collection methods of any other research data you possess.
4-4 Please describe, to the extent possible, the type(s) of data, with reference to the same categories as in 4-2.
4-5 Please provide basic information about the data, such as the number of samples, explanatory (independent) variables, and target (dependent) variables.
4-6 Please describe any analytical methods, statistical techniques, or models that have been applied to the data so far, if applicable.
4-7 Please describe the reliability of the data, addressing points such as measurement accuracy, error range, the characteristics of any noise, or whether expert annotation is required.
4-8 Please describe any biases present in the data, e.g. attribute biases, sample-size imbalances, or fluctuations in the data, if any.
4-9 Please indicate the structure and complexity of the data by selecting from the options below. (Multiple answers allowed)
5-1 Classification of AI research field (multiple answers allowed): Machine Learning; Language and Media Processing; AI Applications
5-2 Please describe your current main research theme(s) (about 100-300 characters per theme).

1. In-Context 3D Craniofacial Shape Completion for Orthognathic Surgical Planning

Facial-driven orthognathic surgical planning requires a patient-specific reference facial appearance that represents the intended postoperative outcome. A clinically important subproblem in this setting is upper-face to lower-face prediction: inferring a normal and anatomically coherent lower facial geometry from an intact upper facial region. This task is challenging due to the nonlinear, region-dependent relationship between facial anatomy and skeletal structure, and is not well addressed by existing approaches that primarily focus on reference bone model estimation or rely on linear statistical shape models. In this work, we propose a retrieval-guided in-context learning framework for 3D craniofacial shape completion. Our method treats complete facial shapes from a population as exemplars and predicts the missing lower face of a query subject by reasoning over these examples. A self-supervised retrieval encoder first selects anatomically relevant support examples based on upper-face geometry. A transformer-based in-context completion network then jointly processes the retrieved exemplars and the query upper face, decomposing each support shape into upper- and lower-face tokens and synthesizing the corresponding lower-face geometry for the query. Unlike existing 3D in-context learning methods that rely on joint sampling or overlapping geometry, our approach enables completion across anatomically disjoint regions. Experiments on craniofacial datasets demonstrate that the proposed method produces realistic, anatomically consistent lower-face predictions and outperforms statistical and deep learning baselines in both geometric accuracy and clinical relevance. Our framework introduces a new paradigm for exemplar-based facial prediction and provides a flexible foundation for face-driven and soft-tissue-aware surgical planning.

2. Training-Free Style Transfer with Position-Bias Removal and Semantic-Guided Attention

Recent diffusion-based style transfer methods have achieved notable progress, with StyleID attracting wide attention for its training-free attention-based design. However, directly injecting style features into self-attention introduces implicit positional bias, where content tokens tend to over-attend to spatially adjacent style regions, preventing proper global correspondence matching. Moreover, without semantic guidance, neighboring content pixels that belong to the same region may attend to different style areas, producing fragmented textures and disrupting the coherence of region-level appearance. We propose StyleID++, a semantic-aware enhancement framework that addresses these two issues through two simple yet effective components. A circular padding strategy eliminates positional bias and restores the attention mechanism's global receptive field, while self-guided attention refinement steers each content query by incorporating the attention patterns of its semantically similar neighbors, ensuring that style selection is supported by consistent regional semantics. This leads to cleaner and more coherent stylization. Extensive experiments show that StyleID++ improves style consistency and visual fidelity while remaining fully training-free, outperforming prior methods.
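The retrieval step in theme 1 (selecting anatomically relevant support examples by upper-face geometry) can be illustrated with a minimal sketch. Everything here is hypothetical and not the actual method: `retrieve_support`, the hand-made 3-dimensional "embeddings", and cosine similarity as the matching score merely stand in for the self-supervised retrieval encoder described above.

```python
import math

def retrieve_support(query, bank, k=2):
    """Return indices of the k exemplars whose (hypothetical) upper-face
    embeddings are most cosine-similar to the query embedding."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    # Score every exemplar against the query, then keep the top-k indices.
    sims = sorted(((cos(query, e), i) for i, e in enumerate(bank)), reverse=True)
    return [i for _, i in sims[:k]]

# Toy exemplar bank: one embedding per complete facial shape (illustrative only).
bank = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.7, 0.7, 0.0],
    [0.0, 0.0, 1.0],
]
query = [0.95, 0.05, 0.0]  # query upper face, nearly aligned with exemplar 0

idx = retrieve_support(query, bank, k=2)
print(idx)  # exemplar 0 ranks first, exemplar 2 second
```

In the full framework the retrieved exemplars would then be tokenized into upper- and lower-face tokens and passed to the in-context completion transformer; this sketch covers only the retrieval stage.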
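The circular-padding idea in theme 2 rests on a known property of zero-padded convolutions: border activations differ from interior ones, so absolute position leaks into the features. A minimal pure-Python sketch (the 3x3 averaging kernel and the constant feature map are illustrative assumptions, not StyleID++ internals) shows that wrap-around padding removes this positional cue.

```python
def conv3x3_avg(img, wrap):
    """Same-size 3x3 average convolution; `wrap=True` uses circular padding,
    `wrap=False` pads with zeros outside the image."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            s = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    y, x = i + di, j + dj
                    if wrap:
                        s += img[y % H][x % W]      # circular: always a real neighbor
                    elif 0 <= y < H and 0 <= x < W:
                        s += img[y][x]              # zero padding: borders see "missing" zeros
            out[i][j] = s / 9.0
    return out

img = [[1.0] * 6 for _ in range(6)]  # constant feature map: no content variation at all

zero_out = conv3x3_avg(img, wrap=False)
wrap_out = conv3x3_avg(img, wrap=True)

# With zero padding the corner response (4/9) differs from the interior (1.0),
# so position is recoverable from activations alone; with circular padding the
# response is uniform and that positional cue disappears.
print(zero_out[0][0], zero_out[3][3])  # 0.444... vs 1.0
print(wrap_out[0][0], wrap_out[3][3])  # 1.0 vs 1.0
```

This is why swapping zero padding for circular padding can restore a position-agnostic, globally matched attention pattern in the stylization network.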