Jiawei Wu

Ph.D. Student in Computer Science
UC Santa Barbara

Office: 3530 Phelps Hall
Email: jiawei_wu (at) cs.ucsb.edu

[Curriculum Vitæ]   [Google Scholar]


About Me

I am a first-year Ph.D. student in the Computer Science Department at the University of California, Santa Barbara. I am advised by Professor William Y. Wang and affiliated with the UCSB NLP Group. My research interests lie at the intersection of natural language processing, reinforcement learning, and deep learning. Before coming to UCSB, I received my B.Sc. from Tsinghua University, where I was advised by Professor Zhiyuan Liu.


News

[May 2018] I will serve as a PC member for EMNLP 2018.
[Mar 2018] I will serve as a PC member for COLING 2018.
[Feb 2018] One paper was accepted by CVPR 2018! Check the preprint version [here].
[Feb 2018] One paper was accepted by NAACL-HLT 2018! Check the camera-ready version [here].
[Jan 2018] I will serve as a student co-chair for SoCal NLP 2018.
[Sep 2017] I will attend SoCal ML 2017 and NIPS 2017. See you there!
[Sep 2017] Started my exciting Ph.D. journey at UC Santa Barbara!


Research Highlights [Full Publication List & Preprints]

Reinforced Co-Training

Co-training methods exploit predicted labels on unlabeled data and select samples based on prediction confidence to augment training. However, sample selection in existing co-training methods follows a predetermined policy, which ignores the sampling bias between the unlabeled and labeled data and fails to explore the data space. We propose a reinforcement learning method that learns to select high-quality unlabeled samples for co-training (a toy sketch of the idea appears after the paper link below).

Papers:
Reinforced Co-Training (NAACL-HLT 2018)
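
For intuition only, here is a minimal, self-contained sketch of the idea, not the paper's implementation: an epsilon-greedy bandit agent learns which partition of the unlabeled pool to pseudo-label and add, using the change in validation accuracy as the reward. The synthetic data, the two feature-split "views", and all hyperparameters are illustrative assumptions; the paper itself uses a richer state representation and a Q-learning agent.

```python
# Toy sketch of RL-driven sample selection for co-training (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X1, X2 = X[:, :10], X[:, 10:]            # two "views": split the feature space

# Small labeled seed set, held-out validation set, and unlabeled pool.
lab, val, unlab = np.split(rng.permutation(len(y)), [100, 300])
clf1 = LogisticRegression(max_iter=500)
clf2 = LogisticRegression(max_iter=500)

# Partition the unlabeled pool into candidate "actions" for the agent.
n_actions = 10
partitions = np.array_split(unlab, n_actions)
q = np.zeros(n_actions)                  # Q-value per partition (bandit-style)
eps, lr = 0.2, 0.5

train_idx, train_y = list(lab), list(y[lab])
for step in range(20):
    # Fit both views on the currently accepted training set.
    clf1.fit(X1[train_idx], train_y)
    clf2.fit(X2[train_idx], train_y)
    base = accuracy_score(y[val], clf2.predict(X2[val]))

    # Epsilon-greedy choice of which unlabeled partition to co-label.
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(q))
    chosen = partitions[a]

    # View 1 labels the chosen samples; view 2 is retrained with them.
    pseudo = clf1.predict(X1[chosen])
    cand_idx = train_idx + list(chosen)
    cand_y = train_y + list(pseudo)
    clf2.fit(X2[cand_idx], cand_y)

    # Reward: change in validation accuracy after adding the partition.
    reward = accuracy_score(y[val], clf2.predict(X2[val])) - base
    q[a] += lr * (reward - q[a])
    if reward > 0:                        # keep the partition only if it helped
        train_idx, train_y = cand_idx, cand_y

print("learned Q-values per partition:", np.round(q, 3))
```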

Video Captioning via Hierarchical Reinforcement Learning

Although previous work (e.g., sequence-to-sequence models) has shown promising results in generating a coarse description of a short video, it remains challenging to caption a video containing multiple fine-grained actions with a detailed description. We propose a novel hierarchical reinforcement learning framework for video captioning, where a high-level Manager module learns to set sub-goals and a low-level Worker module recognizes the primitive actions that fulfill them (a toy sketch of the decoding pattern appears after the paper link below).

Papers:
Video Captioning via Hierarchical Reinforcement Learning (CVPR 2018)
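
Below is a minimal PyTorch sketch of the general Manager/Worker decoding pattern, assuming pre-extracted (mean-pooled) video features. The class name, dimensions, and the fixed goal-update interval are illustrative assumptions and do not reproduce the paper's architecture, attention mechanism, or RL training procedure.

```python
# Toy sketch of a Manager/Worker hierarchical caption decoder (illustrative only).
import torch
import torch.nn as nn

class HRLCaptioner(nn.Module):
    def __init__(self, feat_dim=512, hid=256, goal_dim=64, vocab=1000):
        super().__init__()
        self.manager = nn.GRUCell(feat_dim, hid)                  # sets sub-goals
        self.to_goal = nn.Linear(hid, goal_dim)
        self.embed = nn.Embedding(vocab, hid)
        self.worker = nn.GRUCell(hid + goal_dim + feat_dim, hid)  # emits words
        self.out = nn.Linear(hid, vocab)

    def forward(self, feats, max_len=12, goal_every=4):
        B, H = feats.size(0), self.manager.hidden_size
        h_m = feats.new_zeros(B, H)
        h_w = feats.new_zeros(B, H)
        word = torch.zeros(B, dtype=torch.long, device=feats.device)  # <bos> id 0
        goal = self.to_goal(h_m)
        logits = []
        for t in range(max_len):
            # Manager refreshes the sub-goal every few steps (segment level).
            if t % goal_every == 0:
                h_m = self.manager(feats, h_m)
                goal = self.to_goal(h_m)
            # Worker emits the next word conditioned on the current sub-goal.
            x = torch.cat([self.embed(word), goal, feats], dim=-1)
            h_w = self.worker(x, h_w)
            step = self.out(h_w)
            logits.append(step)
            word = step.argmax(dim=-1)                 # greedy decoding
        return torch.stack(logits, dim=1)              # (B, max_len, vocab)

# Toy usage: a batch of two videos represented by mean-pooled frame features.
feats = torch.randn(2, 512)
print(HRLCaptioner()(feats).shape)                     # torch.Size([2, 12, 1000])
```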


Misc.