Yanru Qu

Yanru Qu

Ph.D. in Computer Science, University of Illinois, Urbana-Champaign

Email: kevinqu16 AT gmail.com, yanruqu2 AT illinois.edu


[Publications] [Projects] [Awards] [Education] [Internships]
[Miscellaneous]

Hi! I am a first-year Ph.D. student in Prof. Jiawei Han's Data Mining Group, Department of Computer Science, University of Illinois, Urbana-Champaign. This year, I received my master's degree from Shanghai Jiao Tong University, where I worked on machine learning and data mining in the APEX Lab, advised by Prof. Weinan Zhang and Prof. Yong Yu. During my graduate study, I took research internships at ByteDance AI Lab, advised by Dr. Hao Zhou and Dr. Lei Li, and at MILA, advised by Prof. Jian Tang and Prof. Jianyun Nie. Before that, I worked remotely with Prof. Jun Wang (University College London) on computational advertising.

My research interests lie in the general area of machine learning and data mining, especially their applications to information systems, knowledge graphs, and natural language, with the goal of pushing the limits of user and content understanding and building more accessible and personalized intelligent systems.

Looking for a summer internship in 2020! Download my CV

Publications

Selected Projects

Multi-hop Reasoning for Question Answering
Dec. 2018 - Feb. 2019
In this project, we study multi-hop reasoning for text-based question answering (QA). In recent years, QA has drawn much attention and achieved great success in simple scenarios, but multi-hop reasoning over several distracting documents has only been preliminarily studied. We propose the Dynamically Fused Graph Network (DFGN) to address this problem. The general idea of DFGN is to extract entity graphs from the input documents, reason over the graphs, and propagate node information back to the text. Specifically, we stack several dynamic fusion layers to mimic humans' step-by-step reasoning behavior. Our model achieves outstanding performance and yields interpretable reasoning chains. This paper was accepted by ACL'19 for oral presentation.
[demo] [paper][code]
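At a high level, one dynamic fusion step can be pictured as softly masking entities by their relevance to the current query, then propagating the masked states over the entity graph. A minimal NumPy sketch, where the function name, shapes, and row-normalized propagation are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def fusion_layer(entity_states, adj, query):
    """One illustrative dynamic fusion step.

    entity_states: (num_entities, dim) node representations.
    adj: (num_entities, num_entities) entity-graph adjacency matrix.
    query: (dim,) current query vector.
    """
    # Soft mask: sigmoid relevance of each entity to the query.
    mask = 1.0 / (1.0 + np.exp(-(entity_states @ query)))
    masked = entity_states * mask[:, None]
    # Propagate masked states to graph neighbors (row-normalized adjacency).
    norm = adj / np.clip(adj.sum(axis=1, keepdims=True), 1.0, None)
    return norm @ masked

# Toy chain graph of 3 entities with 2-dim states.
states = np.ones((3, 2))
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
out = fusion_layer(states, adj, np.zeros(2))  # mask is 0.5 everywhere
```

In the real model the mask is query-conditioned per layer and the propagation uses attention; this sketch only shows the mask-then-propagate pattern.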

Knowledge-enhanced Neighborhood Interaction for Recommendation
June 2018 - Nov. 2019
This work studies the "early summarization" issue of existing graph-based recommendation models, which simply aggregate user and item representations while neglecting the more valuable local interactions among user/item neighbors. We incorporate a knowledge graph to address sparsity and cold start, and propose the Neighborhood Interaction model to make full use of local structures. The proposed framework achieves superior performance in click-through rate prediction (1.1%-8.4% absolute AUC improvements) and outperforms state-of-the-art feature-based, meta-path-based, and graph-network-based methods by a wide margin in top-N recommendation. This paper won the Best Paper Award of DLP-KDD'19.
[paper][code]
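The neighborhood-interaction idea can be sketched as scoring all cross pairs of user-neighbor and item-neighbor embeddings, attention-weighted, instead of first summarizing each side into a single vector. The function name, shapes, and softmax attention below are illustrative, not the exact model:

```python
import numpy as np

def neighborhood_interaction(user_nbrs, item_nbrs):
    """Illustrative pairwise neighborhood-interaction score.

    user_nbrs: (nu, d) embeddings of the user's neighbors.
    item_nbrs: (ni, d) embeddings of the item's neighbors.
    """
    # Inner product between every user neighbor and item neighbor.
    pair_scores = user_nbrs @ item_nbrs.T            # (nu, ni)
    # Softmax attention over all pairs of the interaction matrix.
    w = np.exp(pair_scores - pair_scores.max())
    w = w / w.sum()
    # Attention-weighted sum of pair scores.
    return float((w * pair_scores).sum())
```

"Early summarization" would instead compute `user_nbrs.mean(0) @ item_nbrs.mean(0)`, collapsing each neighborhood before any interaction happens.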

Deep Recommender System for Huawei App Market
Mar. 2017 - Mar. 2018
This is a joint research program between the APEX Lab (SJTU) and Huawei Noah's Ark Lab with over CNY 1 million in funding. I served as the program leader.
We propose a novel deep learning framework for recommendation, called Product-based Neural Networks (PNNs), to tackle the gradient issues of matrix factorization (MF)-based and DNN-based methods. We propose several product operators for PNN, as shown in the figure. The kernel-product version (KPNN) beats libFFM (the winning solution of the Criteo Display Advertising Challenge). The net-in-net version (PIN) was later deployed in Huawei App Market, achieving over 35% CTR improvement in online A/B tests. The corresponding papers were accepted by ACM Transactions on Information Systems and form the basis of a high-value patent. The open-source projects of this work have received 300+ stars on GitHub.
[paper][data][code][code]
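The simplest product operator in PNN takes pairwise inner products between the embeddings of all feature fields and feeds them to the upper network. A minimal NumPy sketch with hypothetical field count and embedding size (the deployed versions use learned kernel products rather than plain inner products):

```python
import numpy as np

def inner_product_layer(field_embeds):
    """Pairwise inner products between all field embedding pairs.

    field_embeds: (num_fields, embed_dim), one row per feature field.
    Returns a vector of length num_fields * (num_fields - 1) / 2.
    """
    num_fields = field_embeds.shape[0]
    products = []
    for i in range(num_fields):
        for j in range(i + 1, num_fields):
            products.append(field_embeds[i] @ field_embeds[j])
    return np.array(products)

# Toy example: 3 fields (e.g. user id, app id, device type), 4-dim embeddings.
rng = np.random.default_rng(0)
embeds = rng.normal(size=(3, 4))
p = inner_product_layer(embeds)  # 3 fields -> 3 pairwise products
```

KPNN generalizes each pair score to a learned bilinear form, and PIN replaces it with a small sub-network per pair.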

Transfer Learning for Named Entity Recognition
June 2017 - Nov. 2017, APEX Lab
In this project, we study cross-specialty transfer learning for Named Entity Recognition (NER). Most previous work on NER transfer focuses on introducing domain-invariant constraints on the LSTM. However, in the real world, the conditional probability of entities may not be identical across specialties, making such models inapplicable. To solve this problem, we introduce a label-aware prior into the LSTM and an L2 constraint on the CRF parameters, and prove that the L2 distance in parameter space is equivalent to the KL-divergence between the CRF output distributions. The work was accepted by NAACL with an oral presentation (6.73%).
[paper][code]
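The CRF-side transfer term amounts to a simple L2 pull of the target CRF parameters toward the source ones, which the project shows is equivalent to a KL-divergence between the two CRF output distributions. A minimal sketch (the function name and `lam` weight are illustrative):

```python
import numpy as np

def transfer_loss(task_nll, crf_params, src_crf_params, lam=0.1):
    """Target-domain training objective sketch.

    task_nll: target-domain CRF negative log-likelihood (a scalar here).
    crf_params / src_crf_params: flattened CRF transition/emission params.
    lam: illustrative weight on the L2 transfer penalty.
    """
    l2 = np.sum((crf_params - src_crf_params) ** 2)
    return task_nll + lam * l2
```

When the two parameter sets coincide, the penalty vanishes and training reduces to plain target-domain NER.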

Representation Learning for Domain Adaptation
Mar. 2017 - Aug. 2017, APEX Lab
Most work on domain adaptation focuses on learning domain-invariant representations, i.e., learning projection functions that map source- and target-domain inputs into the same hidden space. One popular solution adds a maximum mean discrepancy (MMD) constraint to the learned representations; another learns a domain critic. In this work, we follow the domain-critic idea and propose an adversarial representation learning approach that uses Wasserstein distance to measure domain discrepancy and plays a minimax game between the feature extractor and the domain critic. We also provide a generalization bound and a gradient analysis.
[paper][code]
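The quantity at the center of the minimax game is the critic's empirical Wasserstein-1 estimate: the critic maximizes it while the feature extractor minimizes it (with a Lipschitz constraint on the critic in the actual method). A sketch with a toy linear critic, whose weights and feature values are made up for illustration:

```python
import numpy as np

def critic_objective(critic, src_feats, tgt_feats):
    """Empirical Wasserstein-1 estimate: mean critic score on source
    features minus mean critic score on target features."""
    return critic(src_feats).mean() - critic(tgt_feats).mean()

# Toy linear critic over 2-dim features (weights are illustrative).
w = np.array([1.0, -1.0])
critic = lambda feats: feats @ w
src = np.array([[1.0, 0.0], [2.0, 0.0]])   # critic scores: 1, 2
tgt = np.array([[0.0, 1.0]])               # critic score: -1
gap = critic_objective(critic, src, tgt)   # 1.5 - (-1.0) = 2.5
```

Driving `gap` toward zero through the feature extractor is what makes the learned representations domain-invariant.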

Selected Awards

Education

Internships

Miscellaneous

Last updated: 2019/10.
Keep Working.