Ge Li 0001
Person information

- affiliation: Peking University, Key Laboratory of High Confidence Software Technologies, Beijing, China
Other persons with the same name
- Ge Li — disambiguation page
- Ge Li 0002 — Peking University Shenzhen Graduate School, School of Electronic and Computer Engineering, Shenzhen, China
2020 – today
- 2023
- [j14]Jia Li, Ge Li, Zhuo Li, Zhi Jin, Xing Hu, Kechi Zhang, Zhiyi Fu:
CodeEditor: Learning to Edit Source Code with Pre-trained Models. ACM Trans. Softw. Eng. Methodol. 32(6): 143:1-143:22 (2023) - [c67]Jia Li, Yongmin Li, Ge Li, Zhi Jin, Yiyang Hao, Xing Hu:
SkCoder: A Sketch-based Approach for Automatic Code Generation. ICSE 2023: 2124-2135 - [c66]Yunfei Zhao, Yihong Dong, Ge Li:
Seq2Seq or Seq2Tree: Generating Code Using Both Paradigms via Mutual Learning. Internetware 2023: 238-248 - [c65]Yihong Dong, Ge Li, Zhi Jin:
CODEP: Grammatical Seq2Seq Model for General-Purpose Code Generation. ISSTA 2023: 188-198 - [c64]Kechi Zhang, Zhuo Li, Zhi Jin, Ge Li:
Implant Global and Local Hierarchy Information to Sequence based Code Representation Models. ICPC 2023: 157-168 - [c63]Jia Li, Chongyang Tao, Zhi Jin, Fang Liu, Jia Li, Ge Li:
ZC3: Zero-Shot Cross-Language Code Clone Detection. ASE 2023: 875-887 - [c62]Wenhan Wang, Kechi Zhang, Ge Li, Shangqing Liu, Anran Li, Zhi Jin, Yang Liu:
Learning Program Representations with a Tree-Structured Transformer. SANER 2023: 248-259 - [i52]Yihong Dong, Jiazheng Ding, Xue Jiang, Zhuo Li, Ge Li, Zhi Jin:
CodeScore: Evaluating Code Generation by Learning Code Execution. CoRR abs/2301.09043 (2023) - [i51]Jia Li, Yongmin Li, Ge Li, Zhi Jin, Yiyang Hao, Xing Hu:
SkCoder: A Sketch-based Approach for Automatic Code Generation. CoRR abs/2302.06144 (2023) - [i50]Xue Jiang, Yihong Dong, Lecheng Wang, Qiwei Shang, Ge Li:
Self-planning Code Generation with Large Language Model. CoRR abs/2303.06689 (2023) - [i49]Kechi Zhang, Zhuo Li, Zhi Jin, Ge Li:
Implant Global and Local Hierarchy Information to Sequence based Code Representation Models. CoRR abs/2303.07826 (2023) - [i48]Jia Li, Yunfei Zhao, Yongmin Li, Ge Li, Zhi Jin:
Towards Enhancing In-Context Learning for Code Generation. CoRR abs/2303.17780 (2023) - [i47]Yihong Dong, Xue Jiang, Zhi Jin, Ge Li:
Self-collaboration Code Generation via ChatGPT. CoRR abs/2304.07590 (2023) - [i46]Mingyang Geng, Shangwen Wang, Dezun Dong, Haotian Wang, Ge Li, Zhi Jin, Xiaoguang Mao, Xiangke Liao:
An Empirical Study on Using Large Language Models for Multi-Intent Comment Generation. CoRR abs/2304.11384 (2023) - [i45]Kechi Zhang, Ge Li, Jia Li, Zhuo Li, Zhi Jin:
ToolCoder: Teach Code Generation Models to use API search tools. CoRR abs/2305.04032 (2023) - [i44]Zhuo Li, Huangzhao Zhang, Zhi Jin, Ge Li:
WELL: Applying Bug Detectors to Bug Localization via Weakly Supervised Learning. CoRR abs/2305.17384 (2023) - [i43]Yihong Dong, Kangcheng Luo, Xue Jiang, Zhi Jin, Ge Li:
PACE: Improving Prompt with Actor-Critic Editing for Large Language Model. CoRR abs/2308.10088 (2023) - [i42]Jia Li, Chongyang Tao, Zhi Jin, Fang Liu, Jia Allen Li, Ge Li:
ZC3: Zero-Shot Cross-Language Code Clone Detection. CoRR abs/2308.13754 (2023) - [i41]Jia Allen Li, Yongmin Li, Ge Li, Xing Hu, Xin Xia, Zhi Jin:
EditSum: A Retrieve-and-Edit Framework for Source Code Summarization. CoRR abs/2308.13775 (2023) - [i40]Yuqi Zhu, Jia Allen Li, Ge Li, Yunfei Zhao, Jia Li, Zhi Jin, Hong Mei:
Improving Code Generation by Dynamic Temperature Sampling. CoRR abs/2309.02772 (2023) - [i39]Jia Li, Ge Li, Chongyang Tao, Jia Li, Huangzhao Zhang, Fang Liu, Zhi Jin:
Large Language Model-Aware In-Context Learning for Code Generation. CoRR abs/2310.09748 (2023) - 2022
- [j13]Fang Liu, Ge Li, Bolin Wei, Xin Xia, Zhiyi Fu, Zhi Jin:
A unified multi-task learning model for AST-level and token-level code completion. Empir. Softw. Eng. 27(4): 91 (2022) - [j12]Zhehao Zhao, Bo Yang, Ge Li, Huai Liu, Zhi Jin:
Precise Learning of Source Code Contextual Semantics via Hierarchical Dependence Structure and Graph Attention Networks. J. Syst. Softw. 184: 111108 (2022) - [j11]Huangzhao Zhang, Zhiyi Fu, Ge Li, Lei Ma, Zhehao Zhao, Hua'an Yang, Yizhe Sun, Yang Liu, Zhi Jin:
Towards Robustness of Deep Program Processing Models - Detection, Estimation, and Enhancement. ACM Trans. Softw. Eng. Methodol. 31(3): 50:1-50:40 (2022) - [j10]Hao Yu, Xing Hu, Ge Li, Ying Li, Qianxiang Wang, Tao Xie:
Assessing and Improving an Evaluation Dataset for Detecting Semantic Code Clones via Deep Learning. ACM Trans. Softw. Eng. Methodol. 31(4): 62:1-62:25 (2022) - [j9]Hui Liu, Mingzhu Shen, Jiaqi Zhu, Nan Niu, Ge Li, Lu Zhang:
Deep Learning Based Program Generation From Requirements Text: Are We There Yet? IEEE Trans. Software Eng. 48(4): 1268-1289 (2022) - [c61]Jia Li, Yuyuan Zhao, Zhi Jin, Ge Li, Tao Shen, Zhengwei Tao, Chongyang Tao:
SK2: Integrating Implicit Sentiment Knowledge and Explicit Syntax Knowledge for Aspect-Based Sentiment Analysis. CIKM 2022: 1114-1123 - [c60]Han Peng, Ge Li, Yunfei Zhao, Zhi Jin:
Rethinking Positional Encoding in Tree Transformer for Code Representation. EMNLP 2022: 3204-3214 - [c59]Hao Yu, Yiling Lou, Ke Sun, Dezhi Ran, Tao Xie, Dan Hao, Ying Li, Ge Li, Qianxiang Wang:
Automated Assertion Generation via Information Retrieval and Its Integration with Deep Learning. ICSE 2022: 163-174 - [c58]Fang Liu, Ge Li, Zhiyi Fu, Shuai Lu, Yiyang Hao, Zhi Jin:
Learning to Recommend Method Names with Global Context. ICSE 2022: 1294-1306 - [c57]Kechi Zhang, Wenhan Wang, Huangzhao Zhang, Ge Li, Zhi Jin:
Learning to represent programs with heterogeneous graphs. ICPC 2022: 378-389 - [c56]Haojie Zhang, Ge Li, Jia Li, Zhongjin Zhang, Yuqi Zhu, Zhi Jin:
Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively. NeurIPS 2022 - [c55]Sijie Shen, Xiang Zhu, Yihong Dong, Qizhi Guo, Yankun Zhen, Ge Li:
Incorporating domain knowledge through task augmentation for front-end JavaScript code generation. ESEC/SIGSOFT FSE 2022: 1533-1543 - [i38]Fang Liu, Ge Li, Zhiyi Fu, Shuai Lu, Yiyang Hao, Zhi Jin:
Learning to Recommend Method Names with Global Context. CoRR abs/2201.10705 (2022) - [i37]Fang Liu, Zhiyi Fu, Ge Li, Zhi Jin, Hui Liu, Yiyang Hao:
Non-autoregressive Model for Full-line Code Completion. CoRR abs/2204.09877 (2022) - [i36]Yiyang Hao, Ge Li, Yongqiang Liu, Xiaowei Miao, He Zong, Siyuan Jiang, Yang Liu, He Wei:
AixBench: A Code Generation Benchmark Dataset. CoRR abs/2206.13179 (2022) - [i35]Kechi Zhang, Ge Li, Zhi Jin:
What does Transformer learn about source code? CoRR abs/2207.08466 (2022) - [i34]Wenhan Wang, Kechi Zhang, Ge Li, Shangqing Liu, Zhi Jin, Yang Liu:
A Tree-structured Transformer for Program Representation Learning. CoRR abs/2208.08643 (2022) - [i33]Yihong Dong, Ge Li, Zhi Jin:
Antecedent Predictions Are Dominant for Tree-Based Code Generation. CoRR abs/2208.09998 (2022) - [i32]Sijie Shen, Xiang Zhu, Yihong Dong, Qizhi Guo, Yankun Zhen, Ge Li:
Incorporating Domain Knowledge through Task Augmentation for Front-End JavaScript Code Generation. CoRR abs/2208.10091 (2022) - [i31]Jia Li, Zhuo Li, Huangzhao Zhang, Ge Li, Zhi Jin, Xing Hu, Xin Xia:
Poison Attack and Defense on Deep Source Code Processing Models. CoRR abs/2210.17029 (2022) - [i30]Jia Li, Ge Li, Zhuo Li, Zhi Jin, Xing Hu, Kechi Zhang, Zhiyi Fu:
CodeEditor: Learning to Edit Source Code with Pre-trained Models. CoRR abs/2210.17040 (2022) - [i29]Yihong Dong, Ge Li, Zhi Jin:
CODEP: Grammatical Seq2Seq Model for General-Purpose Code Generation. CoRR abs/2211.00818 (2022) - [i28]Haojie Zhang, Ge Li, Jia Li, Zhongjin Zhang, Yuqi Zhu, Zhi Jin:
Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively. CoRR abs/2211.01642 (2022) - 2021
- [j8]Jingxuan Zhang, He Jiang, Shuai Lu, Ge Li, Xin Chen:
DeepDir: a deep learning approach for API directive detection. Sci. China Inf. Sci. 64(9) (2021) - [c54]Jia Li, Yongmin Li, Ge Li, Xing Hu, Xin Xia, Zhi Jin:
EditSum: A Retrieve-and-Edit Framework for Source Code Summarization. ASE 2021: 155-166 - [c53]Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin B. Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu:
CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation. NeurIPS Datasets and Benchmarks 2021 - [c52]Han Peng, Ge Li, Wenhan Wang, Yunfei Zhao, Zhi Jin:
Integrating Tree Path in Transformer for Code Representation. NeurIPS 2021: 9343-9354 - [i27]Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin B. Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu:
CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation. CoRR abs/2102.04664 (2021) - [i26]Wenhan Wang, Ge Li, Sijie Shen, Xin Xia, Zhi Jin:
Modular Tree Network for Source Code Representation Learning. CoRR abs/2104.00196 (2021) - [i25]Zhehao Zhao, Bo Yang, Ge Li, Huai Liu, Zhi Jin:
Precise Learning of Source Code Contextual Semantics via Hierarchical Dependence Structure and Graph Attention Networks. CoRR abs/2111.11435 (2021) - 2020
- [j7]Guangjie Li, Hui Liu, Ge Li, Sijie Shen, Hanlin Tang:
LSTM-based argument recommendation for non-API methods. Sci. China Inf. Sci. 63(9): 1-22 (2020) - [j6]Xing Hu
, Ge Li, Xin Xia, David Lo
, Zhi Jin:
Deep code comment generation with hybrid lexical and syntactical information. Empir. Softw. Eng. 25(3): 2179-2217 (2020) - [j5]Wenhan Wang, Ge Li, Sijie Shen, Xin Xia
, Zhi Jin:
Modular Tree Network for Source Code Representation Learning. ACM Trans. Softw. Eng. Methodol. 29(4): 31:1-31:23 (2020) - [c51]Huangzhao Zhang
, Zhuo Li, Ge Li, Lei Ma, Yang Liu, Zhi Jin:
Generating Adversarial Examples for Holding Robustness of Source Code Processing Models. AAAI 2020: 1169-1176 - [c50]Wenjie Zhang
, Zeyu Sun, Qihao Zhu, Ge Li, Shaowei Cai
, Yingfei Xiong, Lu Zhang:
NLocalSAT: Boosting Local Search with Solution Prediction. IJCAI 2020: 1177-1183 - [c49]Fang Liu, Ge Li, Bolin Wei, Xin Xia, Zhiyi Fu, Zhi Jin:
A Self-Attentional Neural Architecture for Code Completion with Multi-Task Learning. ICPC 2020: 37-47 - [c48]Bolin Wei, Yongmin Li, Ge Li, Xin Xia, Zhi Jin:
Retrieve and Refine: Exemplar-based Neural Comment Generation. ASE 2020: 349-360 - [c47]Fang Liu, Ge Li, Yunfei Zhao, Zhi Jin:
Multi-task Learning based Pre-trained Language Model for Code Completion. ASE 2020: 473-485 - [c46]Boao Li, Meng Yan, Xin Xia, Xing Hu, Ge Li, David Lo:
DeepCommenter: a deep code comment generation tool with hybrid lexical and syntactical information. ESEC/SIGSOFT FSE 2020: 1571-1575 - [c45]Wenhan Wang, Ge Li, Bo Ma, Xin Xia, Zhi Jin:
Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree. SANER 2020: 261-271 - [i24]Wenjie Zhang, Zeyu Sun, Qihao Zhu, Ge Li, Shaowei Cai, Yingfei Xiong, Lu Zhang:
NLocalSAT: Boosting Local Search with Solution Prediction. CoRR abs/2001.09398 (2020) - [i23]Wenhan Wang, Ge Li, Bo Ma, Xin Xia, Zhi Jin:
Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree. CoRR abs/2002.08653 (2020) - [i22]Wenhan Wang, Sijie Shen, Ge Li, Zhi Jin:
Towards Full-line Code Completion with Neural Language Models. CoRR abs/2009.08603 (2020) - [i21]Bolin Wei, Yongmin Li, Ge Li, Xin Xia, Zhi Jin:
Retrieve and Refine: Exemplar-based Neural Comment Generation. CoRR abs/2010.04459 (2020) - [i20]Wenhan Wang, Kechi Zhang, Ge Li, Zhi Jin:
Learning to Represent Programs with Heterogeneous Graphs. CoRR abs/2012.04188 (2020) - [i19]Fang Liu, Ge Li, Yunfei Zhao, Zhi Jin:
Multi-task Learning based Pre-trained Language Model for Code Completion. CoRR abs/2012.14631 (2020)
2010 – 2019
- 2019
- [c44]Zeyu Sun, Qihao Zhu, Lili Mou, Yingfei Xiong, Ge Li, Lu Zhang:
A Grammar-Based Structural CNN Decoder for Code Generation. AAAI 2019: 7055-7062 - [c43]Xing Hu, Rui Men, Ge Li, Zhi Jin:
Deep-AutoCoder: Learning to Complete Code Precisely with Induced Code Tokens. COMPSAC (1) 2019: 159-168 - [c42]Bolin Wei, Shuai Lu, Lili Mou, Hao Zhou, Pascal Poupart, Ge Li, Zhi Jin:
Why Do Neural Dialog Systems Generate Short and Meaningless Replies? a Comparison between Dialog and Translation. ICASSP 2019: 7290-7294 - [c41]Hao Yu, Wing Lam, Long Chen, Ge Li, Tao Xie, Qianxiang Wang:
Neural detection of semantic code clones via tree-based convolution. ICPC 2019: 70-80 - [c40]Bolin Wei, Ge Li, Xin Xia, Zhiyi Fu, Zhi Jin:
Code Generation as a Dual Task of Code Summarization. NeurIPS 2019: 6559-6569 - [i18]Bolin Wei, Ge Li, Xin Xia, Zhiyi Fu, Zhi Jin:
Code Generation as a Dual Task of Code Summarization. CoRR abs/1910.05923 (2019) - 2018
- [j4]Tao Xie, He Jiang, Ge Li, Tianyu Wo, Rahul Pandita, Chang Xu, Lihua Xu:
Preface. J. Comput. Sci. Technol. 33(5): 873-875 (2018) - [c39]Xing Hu, Ge Li, Xin Xia, David Lo
, Shuai Lu, Zhi Jin:
Summarizing Source Code with Transferred API Knowledge. IJCAI 2018: 2269-2275 - [c38]Xiaochen Li, He Jiang, Dong Liu, Zhilei Ren, Ge Li:
Unsupervised deep bug report summarization. ICPC 2018: 144-155 - [c37]Xing Hu, Ge Li, Xin Xia, David Lo, Zhi Jin:
Deep code comment generation. ICPC 2018: 200-210 - [i17]Xiaochen Li, He Jiang, Zhilei Ren, Ge Li, Jingxuan Zhang:
Deep Learning in Software Engineering. CoRR abs/1805.04825 (2018) - [i16]Zeyu Sun, Qihao Zhu, Lili Mou, Yingfei Xiong, Ge Li, Lu Zhang:
A Grammar-Based Structural CNN Decoder for Code Generation. CoRR abs/1811.06837 (2018) - 2017
- [c36]Yunchuan Chen, Ge Li, Zhi Jin:
Learning Sparse Overcomplete Word Vectors Without Intermediate Dense Representations. KSEM 2017: 3-15 - [c35]Wenhao Huang, Ge Li, Zhi Jin:
Improved Knowledge Base Completion by the Path-Augmented TransR Model. KSEM 2017: 149-159 - [c34]Yangyang Lu, Ge Li, Zelong Zhao, Linfeng Wen, Zhi Jin:
Learning to Infer API Mappings from API Documents. KSEM 2017: 237-248 - [i15]Bolin Wei, Shuai Lu, Lili Mou, Hao Zhou, Pascal Poupart, Ge Li, Zhi Jin:
Why Do Neural Dialog Systems Generate Short and Meaningless Replies? A Comparison between Dialog and Translation. CoRR abs/1712.02250 (2017) - 2016
- [j3]Qiang Wei, Zhi Jin, Lixing Li, Ge Li:
Lightweight semantic service modelling for IoT: an environment-based approach. Int. J. Embed. Syst. 8(2/3): 164-173 (2016) - [c33]Lili Mou, Ge Li, Lu Zhang, Tao Wang, Zhi Jin:
Convolutional Neural Networks over Tree Structures for Programming Language Processing. AAAI 2016: 1287-1293 - [c32]Yunchuan Chen, Lili Mou, Yan Xu, Ge Li, Zhi Jin:
Compressing Neural Language Models by Sparse Word Representations. ACL (1) 2016 - [c31]Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, Zhi Jin:
Natural Language Inference by Tree-Based Convolution and Heuristic Matching. ACL (2) 2016 - [c30]Lili Mou, Ran Jia, Yan Xu, Ge Li, Lu Zhang, Zhi Jin:
Distilling Word Embeddings: An Encoding Approach. CIKM 2016: 1977-1980 - [c29]Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin:
Improved relation classification by deep recurrent neural networks with data augmentation. COLING 2016: 1461-1470 - [c28]Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, Zhi Jin:
Sequence to Backward and Forward Sequences: A Content-Introducing Approach to Generative Short-Text Conversation. COLING 2016: 3349-3358 - [c27]Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, Zhi Jin:
How Transferable are Neural Networks in NLP Applications? EMNLP 2016: 479-489 - [c26]Zhao Meng, Lili Mou, Ge Li, Zhi Jin:
Context-Aware Tree-Based Convolutional Neural Networks for Natural Language Inference. KSEM 2016: 515-526 - [c25]Yangyang Lu, Ge Li, Rui Miao, Zhi Jin:
Learning Embeddings of API Tokens to Facilitate Deep Learning Based Program Processing. KSEM 2016: 527-539 - [i14]Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin:
Improved Relation Classification by Deep Recurrent Neural Networks with Data Augmentation. CoRR abs/1601.03651 (2016) - [i13]Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, Zhi Jin:
How Transferable are Neural Networks in NLP Applications? CoRR abs/1603.06111 (2016) - [i12]Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, Zhi Jin:
Sequence to Backward and Forward Sequences: A Content-Introducing Approach to Generative Short-Text Conversation. CoRR abs/1607.00970 (2016) - [i11]Yunchuan Chen, Lili Mou, Yan Xu, Ge Li, Zhi Jin:
Compressing Neural Language Models by Sparse Word Representations. CoRR abs/1610.03950 (2016) - [i10]Wenhao Huang, Ge Li, Zhi Jin:
Improved Knowledge Base Completion by Path-Augmented TransR Model. CoRR abs/1610.04073 (2016) - 2015
- [c24]Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, Zhi Jin:
Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Paths. EMNLP 2015: 1785-1794 - [c23]Hao Peng, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin:
A Comparative Study on Regularization Strategies for Embedding-based Neural Networks. EMNLP 2015: 2106-2111 - [c22]Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, Zhi Jin:
Discriminative Neural Sentence Modeling by Tree-Based Convolution. EMNLP 2015: 2315-2325 - [c21]Hao Peng, Lili Mou, Ge Li, Yuxuan Liu, Lu Zhang, Zhi Jin:
Building Program Vector Representations for Deep Learning. KSEM 2015: 547-553 - [i9]Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, Zhi Jin:
Tree-based Convolution: A New Neural Architecture for Sentence Modeling. CoRR abs/1504.01106 (2015) - [i8]Lili Mou, Ge Li, Yan Xu, Lu Zhang, Zhi Jin:
Distilling Word Embeddings: An Encoding Approach. CoRR abs/1506.04488 (2015) - [i7]Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, Zhi Jin:
Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Path. CoRR abs/1508.03720 (2015) - [i6]Hao Peng, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin:
A Comparative Study on Regularization Strategies for Embedding-based Neural Networks. CoRR abs/1508.03721 (2015) - [i5]Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin:
On End-to-End Program Generation from User Intention by Deep Neural Networks. CoRR abs/1510.07211 (2015) - [i4]Lili Mou, Rui Yan, Ge Li, Lu Zhang, Zhi Jin:
Backbone Language Modeling for Constrained Natural Language Generation. CoRR abs/1512.06612 (2015) - [i3]Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, Zhi Jin:
Recognizing Entailment and Contradiction by Tree-based Convolution. CoRR abs/1512.08422 (2015) - 2014
- [j2]Yan Xu, Ge Li, Lili Mou, Yangyang Lu:
Learning Non-Taxonomic Relations on Demand for Ontology Extension. Int. J. Softw. Eng. Knowl. Eng. 24(8): 1159-1176 (2014) - [c20]Lili Mou, Ge Li, Zhi Jin, Lu Zhang:
Verification Based on Hyponymy Hierarchical Characteristics for Web-Based Hyponymy Discovery. KSEM 2014: 81-92 - [i2]