Kee-Eung Kim
2020 – today
- 2024
- [j17] Kyungsik Lee, Hana Yoo, Sumin Shin, Wooyoung Kim, Yeonung Baek, Hyunjin Kang, Jaehyun Kim, Kee-Eung Kim: A submodular optimization approach to trustworthy loan approval automation. AI Mag. 45(4): 502-513 (2024)
- [c79] Sungyoon Kim, Yunseon Choi, Daiki E. Matsunaga, Kee-Eung Kim: Stitching Sub-trajectories with Conditional Diffusion Model for Goal-Conditioned Offline RL. AAAI 2024: 13160-13167
- [c78] Kyungsik Lee, Hana Yoo, Sumin Shin, Wooyoung Kim, Yeonung Baek, Hyunjin Kang, Jaehyun Kim, Kee-Eung Kim: A Submodular Optimization Approach to Accountable Loan Approval. AAAI 2024: 22761-22769
- [c77] Yunseon Choi, Sangmin Bae, Seonghyun Ban, Minchan Jeong, Chuheng Zhang, Lei Song, Li Zhao, Jiang Bian, Kee-Eung Kim: Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL. ACL (1) 2024: 8252-8271
- [c76] Oh Joon Kwon, Daiki E. Matsunaga, Kee-Eung Kim: GDPO: Learning to Directly Align Language Models with Diversity Using GFlowNets. EMNLP 2024: 17120-17139
- [c75] Haanvid Lee, Tri Wahyu Guntara, Jongmin Lee, Yung-Kyun Noh, Kee-Eung Kim: Kernel Metric Learning for In-Sample Off-Policy Evaluation of Deterministic RL Policies. ICLR 2024
- [c74] Yunseon Choi, Li Zhao, Chuheng Zhang, Lei Song, Jiang Bian, Kee-Eung Kim: Diversification of Adaptive Policy for Effective Offline Reinforcement Learning. IJCAI 2024: 3863-3871
- [i24] Sungyoon Kim, Yunseon Choi, Daiki E. Matsunaga, Kee-Eung Kim: Stitching Sub-Trajectories with Conditional Diffusion Model for Goal-Conditioned Offline RL. CoRR abs/2402.07226 (2024)
- [i23] Haeju Lee, Minchan Jeong, Se-Young Yun, Kee-Eung Kim: Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning. CoRR abs/2402.08594 (2024)
- [i22] Haanvid Lee, Tri Wahyu Guntara, Jongmin Lee, Yung-Kyun Noh, Kee-Eung Kim: Kernel Metric Learning for In-Sample Off-Policy Evaluation of Deterministic RL Policies. CoRR abs/2405.18792 (2024)
- [i21] Youngjin Ahn, Jungwoo Park, Sangha Park, Jonghyun Choi, Kee-Eung Kim: SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization. CoRR abs/2406.12233 (2024)
- [i20] Yunseon Choi, Sangmin Bae, Seonghyun Ban, Minchan Jeong, Chuheng Zhang, Lei Song, Li Zhao, Jiang Bian, Kee-Eung Kim: Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL. CoRR abs/2407.14733 (2024)
- [i19] Seongmin Lee, Jaewook Shin, Youngjin Ahn, Seokin Seo, Ohjoon Kwon, Kee-Eung Kim: Zero-Shot Multi-Hop Question Answering via Monte-Carlo Tree Search with Large Language Models. CoRR abs/2409.19382 (2024)
- [i18] Oh Joon Kwon, Daiki E. Matsunaga, Kee-Eung Kim: GDPO: Learning to Directly Align Language Models with Diversity Using GFlowNets. CoRR abs/2410.15096 (2024)
- 2023
- [j16] Mihye Kim, Jimyung Choi, Jaehyun Kim, Wooyoung Kim, Yeonung Baek, Gisuk Bang, Kwangwoon Son, Yeonman Ryou, Kee-Eung Kim: Trustworthy residual vehicle value prediction for auto finance. AI Mag. 44(4): 394-405 (2023)
- [c73] Mihye Kim, Jimyung Choi, Jaehyun Kim, Wooyoung Kim, Yeonung Baek, Gisuk Bang, Kwangwoon Son, Yeonman Ryou, Kee-Eung Kim: Trustworthy Residual Vehicle Value Prediction for Auto Finance. AAAI 2023: 15537-15544
- [c72] Haeju Lee, Minchan Jeong, Se-Young Yun, Kee-Eung Kim: Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning. EMNLP (Findings) 2023: 4942-4958
- [c71] HyeongJoo Hwang, Seokin Seo, Youngsoo Jang, Sungyoon Kim, Geon-Hyeong Kim, Seunghoon Hong, Kee-Eung Kim: Information-Theoretic State Space Model for Multi-View Reinforcement Learning. ICML 2023: 14249-14282
- [c70] Daiki E. Matsunaga, Jongmin Lee, Jaeseok Yoon, Stefanos Leonardos, Pieter Abbeel, Kee-Eung Kim: AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation. NeurIPS 2023
- [c69] Seokin Seo, HyeongJoo Hwang, Hongseok Yang, Kee-Eung Kim: Regularized Behavior Cloning for Blocking the Leakage of Past Action Information. NeurIPS 2023
- [i17] Jaeseok Yoon, Seunghyun Hwang, Ran Han, Jeonguk Bang, Kee-Eung Kim: Adapting Text-based Dialogue State Tracker for Spoken Dialogues. CoRR abs/2308.15053 (2023)
- [i16] Daiki E. Matsunaga, Jongmin Lee, Jaeseok Yoon, Stefanos Leonardos, Pieter Abbeel, Kee-Eung Kim: AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation. CoRR abs/2311.02194 (2023)
- 2022
- [c68] Jongmin Lee, Cosmin Paduraru, Daniel J. Mankowitz, Nicolas Heess, Doina Precup, Kee-Eung Kim, Arthur Guez: COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation. ICLR 2022
- [c67] Sunghoon Hong, Deunsol Yoon, Kee-Eung Kim: Structure-Aware Transformer Policy for Inhomogeneous Multi-Task Reinforcement Learning. ICLR 2022
- [c66] Youngsoo Jang, Jongmin Lee, Kee-Eung Kim: GPT-Critic: Offline Reinforcement Learning for End-to-End Task-Oriented Dialogue Systems. ICLR 2022
- [c65] Geon-Hyeong Kim, Seokin Seo, Jongmin Lee, Wonseok Jeon, HyeongJoo Hwang, Hongseok Yang, Kee-Eung Kim: DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations. ICLR 2022
- [c64] Sanghoon Myung, In Huh, Wonik Jang, Jae Myung Choe, Jisu Ryu, Daesin Kim, Kee-Eung Kim, Changwook Jeong: PAC-Net: A Model Pruning Approach to Inductive Transfer Learning. ICML 2022: 16240-16252
- [c63] Jinhyeon Kim, Kee-Eung Kim: Data Augmentation for Learning to Play in Text-Based Games. IJCAI 2022: 3143-3149
- [c62] Haeju Lee, Oh Joon Kwon, Yunseon Choi, Minho Park, Ran Han, Yoonhyung Kim, Jinhyeon Kim, Youngjune Lee, Haebin Shin, Kangwook Lee, Kee-Eung Kim: Learning to Embed Multi-Modal Contexts for Situated Conversational Agents. NAACL-HLT (Findings) 2022: 813-830
- [c61] Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, Kee-Eung Kim: LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation. NeurIPS 2022
- [c60] Haanvid Lee, Jongmin Lee, Yunseon Choi, Wonseok Jeon, Byung-Jun Lee, Yung-Kyun Noh, Kee-Eung Kim: Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions. NeurIPS 2022
- [i15] Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, Kee-Eung Kim: LobsDICE: Offline Imitation Learning from Observation via Stationary Distribution Correction Estimation. CoRR abs/2202.13536 (2022)
- [i14] Jongmin Lee, Cosmin Paduraru, Daniel J. Mankowitz, Nicolas Heess, Doina Precup, Kee-Eung Kim, Arthur Guez: COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation. CoRR abs/2204.08957 (2022)
- [i13] Sanghoon Myung, In Huh, Wonik Jang, Jae Myung Choe, Jisu Ryu, Daesin Kim, Kee-Eung Kim, Changwook Jeong: PAC-Net: A Model Pruning Approach to Inductive Transfer Learning. CoRR abs/2206.05703 (2022)
- [i12] Haanvid Lee, Jongmin Lee, Yunseon Choi, Wonseok Jeon, Byung-Jun Lee, Yung-Kyun Noh, Kee-Eung Kim: Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions. CoRR abs/2210.13373 (2022)
- 2021
- [j15] Jiyeon Ham, Soohyun Lim, Kyeng-Hun Lee, Kee-Eung Kim: Corrigendum to 'Extensions to Hybrid Code Networks for FAIR Dialog Data' [Computer Speech & Language volume 53 (2019) Pages 80-91]. Comput. Speech Lang. 65: 100961 (2021)
- [c59] Youngjune Lee, Kee-Eung Kim: Dual Correction Strategy for Ranking Distillation in Top-N Recommender System. CIKM 2021: 3186-3190
- [c58] Seongho Eun, Bon San Koo, Ji Seon Oh, Kee-Eung Kim, Byungtae Lee: Personalized Treatment Using Biologics: An Analysis Using Counterfactual Regression Based on Deep Learning. ICIS 2021
- [c57] Byung-Jun Lee, Jongmin Lee, Kee-Eung Kim: Representation Balancing Offline Model-based Reinforcement Learning. ICLR 2021
- [c56] Youngsoo Jang, Seokin Seo, Jongmin Lee, Kee-Eung Kim: Monte-Carlo Planning and Learning with Language Action Value Estimates. ICLR 2021
- [c55] Deunsol Yoon, Sunghoon Hong, Byung-Jun Lee, Kee-Eung Kim: Winning the L2RPN Challenge: Power Grid Management via Semi-Markov Afterstate Actor-Critic. ICLR 2021
- [c54] Jongmin Lee, Wonseok Jeon, Byung-Jun Lee, Joelle Pineau, Kee-Eung Kim: OptiDICE: Offline Policy Optimization via Stationary Distribution Correction Estimation. ICML 2021: 6120-6130
- [c53] HyeongJoo Hwang, Geon-Hyeong Kim, Seunghoon Hong, Kee-Eung Kim: Multi-View Representation Learning via Total Correlation Objective. NeurIPS 2021: 12194-12207
- [i11] Jongmin Lee, Wonseok Jeon, Byung-Jun Lee, Joelle Pineau, Kee-Eung Kim: OptiDICE: Offline Policy Optimization via Stationary Distribution Correction Estimation. CoRR abs/2106.10783 (2021)
- [i10] Youngjune Lee, Kee-Eung Kim: Dual Correction Strategy for Ranking Distillation in Top-N Recommender System. CoRR abs/2109.03459 (2021)
- [i9] Youngjune Lee, Oh Joon Kwon, Haeju Lee, Joonyoung Kim, Kangwook Lee, Kee-Eung Kim: Augment & Valuate: A Data Enhancement Pipeline for Data-Centric AI. CoRR abs/2112.03837 (2021)
- 2020
- [j14] Kee-Eung Kim, Jun Zhu: Foreword: special issue for the journal track of the 11th Asian Conference on Machine Learning (ACML 2019). Mach. Learn. 109(3): 441-443 (2020)
- [j13] Kee-Eung Kim, Vineeth N. Balasubramanian: Foreword: special issue for the journal track of the 12th Asian conference on machine learning (ACML 2020). Mach. Learn. 109(12): 2243-2245 (2020)
- [j12] Jang Won Bae, Junseok Lee, Do-Hyung Kim, Kanghoon Lee, Jongmin Lee, Kee-Eung Kim, Il-Chul Moon: Layered Behavior Modeling via Combining Descriptive and Prescriptive Approaches: A Case Study of Infantry Company Engagement. IEEE Trans. Syst. Man Cybern. Syst. 50(7): 2551-2565 (2020)
- [c52] Byung-Jun Lee, Seunghoon Hong, Kee-Eung Kim: Residual Neural Processes. AAAI 2020: 4545-4552
- [c51] Jongmin Lee, Wonseok Jeon, Geon-Hyeong Kim, Kee-Eung Kim: Monte-Carlo Tree Search in Continuous Action Spaces with Value Gradients. AAAI 2020: 4561-4568
- [c50] Youngsoo Jang, Jongmin Lee, Kee-Eung Kim: Bayes-Adaptive Monte-Carlo Planning and Learning for Goal-Oriented Dialogues. AAAI 2020: 7994-8001
- [c49] DongHoon Ham, Jeong-Gwan Lee, Youngsoo Jang, Kee-Eung Kim: End-to-End Neural Pipeline for Goal-Oriented Dialogue Systems using GPT-2. ACL 2020: 583-592
- [c48] Geon-Hyeong Kim, Youngsoo Jang, Hongseok Yang, Kee-Eung Kim: Variational Inference for Sequential Data with Future Likelihood Estimates. ICML 2020: 5296-5305
- [c47] Byung-Jun Lee, Jongmin Lee, Peter Vrancx, Dongho Kim, Kee-Eung Kim: Batch Reinforcement Learning with Hyperparameter Gradients. ICML 2020: 5725-5735
- [c46] Jongmin Lee, Byung-Jun Lee, Kee-Eung Kim: Reinforcement Learning for Control with Multiple Frequencies. NeurIPS 2020
- [c45] HyeongJoo Hwang, Geon-Hyeong Kim, Seunghoon Hong, Kee-Eung Kim: Variational Interaction Information Maximization for Cross-domain Disentanglement. NeurIPS 2020
- [i8] HyeongJoo Hwang, Geon-Hyeong Kim, Seunghoon Hong, Kee-Eung Kim: Variational Interaction Information Maximization for Cross-domain Disentanglement. CoRR abs/2012.04251 (2020)
2010 – 2019
- 2019
- [j11] Jiyeon Ham, Soohyun Lim, Kyeng-Hun Lee, Kee-Eung Kim: Extensions to hybrid code networks for FAIR dialog dataset. Comput. Speech Lang. 53: 80-91 (2019)
- [j10] Yung-Kyun Noh, Ji Young Park, Byoung Geol Choi, Kee-Eung Kim, Seung-Woon Rha: A Machine Learning-Based Approach for the Prediction of Acute Coronary Syndrome Requiring Revascularization. J. Medical Syst. 43(8): 253:1-253:8 (2019)
- [j9] Kanghoon Lee, Geon-hyeong Kim, Pedro A. Ortega, Daniel D. Lee, Kee-Eung Kim: Bayesian optimistic Kullback-Leibler exploration. Mach. Learn. 108(5): 765-783 (2019)
- [c44] Geon-hyeong Kim, Youngsoo Jang, Jongmin Lee, Wonseok Jeon, Hongseok Yang, Kee-Eung Kim: Trust Region Sequential Variational Inference. ACML 2019: 1033-1048
- [c43] Youngsoo Jang, Jongmin Lee, Jaeyoung Park, Kyeng-Hun Lee, Pierre Lison, Kee-Eung Kim: PyOpenDial: A Python-based Domain-Independent Toolkit for Developing Spoken Dialogue Systems with Probabilistic Rules. EMNLP/IJCNLP (3) 2019: 187-192
- [i7] Marcin B. Tomczak, Dongho Kim, Peter Vrancx, Kee-Eung Kim: Policy Optimization Through Approximated Importance Sampling. CoRR abs/1910.03857 (2019)
- 2018
- [j8] Youngsoo Jang, Jiyeon Ham, Byung-Jun Lee, Kee-Eung Kim: Cross-Language Neural Dialog State Tracker for Large Ontologies Using Hierarchical Attention. IEEE ACM Trans. Audio Speech Lang. Process. 26(11): 2072-2082 (2018)
- [c42] Kee-Eung Kim, Hyun Soo Park: Imitation Learning via Kernel Mean Embedding. AAAI 2018: 3415-3422
- [c41] Eun Sang Cha, Kee-Eung Kim, Stefano Longo, Ankur Mehta: OP-CAS: Collision Avoidance with Overtaking Maneuvers. ITSC 2018: 1715-1720
- [c40] Wonseok Jeon, Seokin Seo, Kee-Eung Kim: A Bayesian Approach to Generative Adversarial Imitation Learning. NeurIPS 2018: 7440-7450
- [c39] Jongmin Lee, Geon-hyeong Kim, Pascal Poupart, Kee-Eung Kim: Monte-Carlo Tree Search for Constrained POMDPs. NeurIPS 2018: 7934-7943
- 2017
- [j7] Robert J. Durrant, Kee-Eung Kim, Geoffrey Holmes, Stephen Marsland, Masashi Sugiyama, Zhi-Hua Zhou: Foreword: special issue for the journal track of the 8th Asian conference on machine learning (ACML 2016). Mach. Learn. 106(5): 623-625 (2017)
- [c38] Byung-Jun Lee, Jongmin Lee, Kee-Eung Kim: Hierarchically-partitioned Gaussian Process Approximation. AISTATS 2017: 822-831
- [c37] Jongmin Lee, Youngsoo Jang, Pascal Poupart, Kee-Eung Kim: Constrained Bayesian Reinforcement Learning via Approximate Linear Programming. IJCAI 2017: 2088-2095
- [c36] Yung-Kyun Noh, Masashi Sugiyama, Kee-Eung Kim, Frank C. Park, Daniel D. Lee: Generative Local Metric Learning for Kernel Regression. NIPS 2017: 2452-2462
- [c35] Jang Won Bae, Bowon Nam, Kee-Eung Kim, Junseok Lee, Il-Chul Moon: Hybrid modeling and simulation of tactical maneuvers in computer generated force. SMC 2017: 942-947
- 2016
- [j6] Byung-Jun Lee, Kee-Eung Kim: Dialog History Construction with Long-Short Term Memory for Robust Generative Dialog State Tracking. Dialogue Discourse 7(3): 47-64 (2016)
- [c34] Daehyun Lee, Jongmin Lee, Kee-Eung Kim: Multi-view Automatic Lip-Reading Using Neural Network. ACCV Workshops (2) 2016: 290-302
- [c33] Teakgyu Hong, Jongmin Lee, Kee-Eung Kim, Pedro A. Ortega, Daniel D. Lee: Bayesian Reinforcement Learning with Behavioral Feedback. IJCAI 2016: 1571-1577
- [c32] Youngsoo Jang, Jiyeon Ham, Byung-Jun Lee, Youngjae Chang, Kee-Eung Kim: Neural dialog state tracker for large ontologies by attention mechanism. SLT 2016: 531-537
- [e1] Robert J. Durrant, Kee-Eung Kim: Proceedings of The 8th Asian Conference on Machine Learning, ACML 2016, Hamilton, New Zealand, November 16-18, 2016. JMLR Workshop and Conference Proceedings 63, JMLR.org 2016
- 2015
- [j5] Jaedeug Choi, Kee-Eung Kim: Hierarchical Bayesian Inverse Reinforcement Learning. IEEE Trans. Cybern. 45(4): 793-805 (2015)
- [c31] Pascal Poupart, Aarti Malhotra, Pei Pei, Kee-Eung Kim, Bongseok Goh, Michael Bowling: Approximate Linear Programming for Constrained Partially Observable Markov Decision Processes. AAAI 2015: 3342-3348
- [c30] Hyeoneun Kim, Woosang Lim, Kanghoon Lee, Yung-Kyun Noh, Kee-Eung Kim: Reward Shaping for Model-Based Bayesian Reinforcement Learning. AAAI 2015: 3548-3555
- [c29] Kanghoon Lee, Kee-Eung Kim: Tighter Value Function Bounds for Bayesian Reinforcement Learning. AAAI 2015: 3556-3563
- [c28] Pedro A. Ortega, Kee-Eung Kim, Daniel D. Lee: Reactive bandits with attitude. AISTATS 2015
- [i6] Pedro A. Ortega, Daniel A. Braun, Justin Dyer, Kee-Eung Kim, Naftali Tishby: Information-Theoretic Bounded Rationality. CoRR abs/1512.06789 (2015)
- 2014
- [c27] Byung-Jun Lee, Woosang Lim, Daejoong Kim, Kee-Eung Kim: Optimizing Generative Dialog State Tracker via Cascading Gradient Descent. SIGDIAL Conference 2014: 273-281
- [i5] Leonid Peshkin, Kee-Eung Kim, Nicolas Meuleau, Leslie Pack Kaelbling: Learning to Cooperate via Policy Search. CoRR abs/1408.1484 (2014)
- 2013
- [c26] Jaedeug Choi, Kee-Eung Kim: Bayesian Nonparametric Feature Construction for Inverse Reinforcement Learning. IJCAI 2013: 1287-1293
- [c25] Daejoong Kim, Jaedeug Choi, Kee-Eung Kim, Jungsu Lee, Jinho Sohn: Engineering Statistical Dialog State Trackers: A Case Study on DSTC. SIGDIAL Conference 2013: 462-466
- [i4] Nicolas Meuleau, Kee-Eung Kim, Leslie Pack Kaelbling, Anthony R. Cassandra: Solving POMDPs by Searching the Space of Finite Policies. CoRR abs/1301.6720 (2013)
- [i3] Nicolas Meuleau, Leonid Peshkin, Kee-Eung Kim, Leslie Pack Kaelbling: Learning Finite-State Controllers for Partially Observable Environments. CoRR abs/1301.6721 (2013)
- 2012
- [j4] Byung Kon Kang, Kee-Eung Kim: Exploiting symmetries for single- and multi-agent Partially Observable Stochastic Domains. Artif. Intell. 182-183: 32-57 (2012)
- [c24] Jaedeug Choi, Kee-Eung Kim: Nonparametric Bayesian Inverse Reinforcement Learning for Multiple Reward Functions. NIPS 2012: 314-322
- [c23] Dongho Kim, Kee-Eung Kim, Pascal Poupart: Cost-Sensitive Exploration in Bayesian Reinforcement Learning. NIPS 2012: 3077-3085
- [i2] Eunsoo Oh, Kee-Eung Kim: A Geometric Traversal Algorithm for Reward-Uncertain MDPs. CoRR abs/1202.3754 (2012)
- 2011
- [j3] Jaedeug Choi, Kee-Eung Kim: Inverse Reinforcement Learning in Partially Observable Environments. J. Mach. Learn. Res. 12: 691-730 (2011)
- [j2] Dongho Kim, Jin H. Kim, Kee-Eung Kim: Robust Performance Evaluation of POMDP-Based Dialogue Systems. IEEE ACM Trans. Audio Speech Lang. Process. 19(4): 1029-1040 (2011)
- [c22] Jaeyoung Park, Kee-Eung Kim, Yoon-Kyu Song: A POMDP-Based Optimal Control of P300-Based Brain-Computer Interfaces. AAAI 2011: 1559-1562
- [c21] Pascal Poupart, Kee-Eung Kim, Dongho Kim: Closing the Gap: Improved Bounds on Optimal POMDP Solutions. ICAPS 2011
- [c20] Dongho Kim, Jaesong Lee, Kee-Eung Kim, Pascal Poupart: Point-Based Value Iteration for Constrained POMDPs. IJCAI 2011: 1968-1974
- [c19] Jaedeug Choi, Kee-Eung Kim: MAP Inference for Bayesian Inverse Reinforcement Learning. NIPS 2011: 1989-1997
- [c18] Eunsoo Oh, Kee-Eung Kim: A Geometric Traversal Algorithm for Reward-Uncertain MDPs. UAI 2011: 565-572
- 2010
- [c17] Jaeyoung Park, Kee-Eung Kim, Sungho Jo: A POMDP approach to P300-based brain-computer interfaces. IUI 2010: 1-10
- [c16] Youngwook Kim, Kee-Eung Kim: Point-Based Bounded Policy Iteration for Decentralized POMDPs. PRICAI 2010: 614-619
2000 – 2009
- 2009
- [c15] Jaedeug Choi, Kee-Eung Kim: Inverse Reinforcement Learning in Partially Observable Environments. IJCAI 2009: 1028-1033
- 2008
- [c14] Kee-Eung Kim: Exploiting Symmetries in POMDPs for Point-Based Algorithms. AAAI 2008: 1043-1048
- [c13] Hyeong Seop Sim, Kee-Eung Kim, Jin Hyung Kim, Du-Seong Chang, Myoung-Wan Koo: Symbolic Heuristic Search Value Iteration for Factored POMDPs. AAAI 2008: 1088-1093
- [c12] Dongho Kim, Hyeong Seop Sim, Kee-Eung Kim, Jin Hyung Kim, Hyunjeong Kim, Joo Won Sung: Effects of user modeling on POMDP-based dialogue systems. INTERSPEECH 2008: 1169-1172
- 2007
- [c11] Kyungmin Min, Seonghun Lee, Kee-Eung Kim, Jin Hyung Kim: Place Recognition Using Multiple Wearable Cameras. UCS 2007: 266-273
- 2006
- [c10] Kee-Eung Kim, Wook Chang, Sung-Jung Cho, Junghyun Shim, Hyunjeong Lee, Joonah Park, Youngbeom Lee, Sangryoung Kim: Hand Grip Pattern Recognition for Mobile User Interfaces. AAAI 2006: 1789-1794
- 2005
- [c9] SeongHwan Cho, Kee-Eung Kim: Variable bandwidth allocation scheme for energy efficient wireless sensor network. ICC 2005: 3314-3318
- 2003
- [j1] Kee-Eung Kim, Thomas L. Dean: Solving factored MDPs using non-homogeneous partitions. Artif. Intell. 147(1-2): 225-251 (2003)
- 2002
- [c8] Kee-Eung Kim, Thomas L. Dean: Solving Factored MDPs with Large Action Space Using Algebraic Decision Diagrams. PRICAI 2002: 80-89
- 2001
- [b1] Kee-Eung Kim: Representations and Algorithms for Large Stochastic Planning Problems. Brown University, USA, 2001
- [c7] Kee-Eung Kim, Thomas L. Dean: Solving Factored MDPs via Non-Homogeneous Partitioning. IJCAI 2001: 683-689
- [i1] Leonid Peshkin, Kee-Eung Kim, Nicolas Meuleau, Leslie Pack Kaelbling: Learning to Cooperate via Policy Search. CoRR cs.LG/0105032 (2001)
- 2000
- [c6] Kee-Eung Kim, Thomas L. Dean, Nicolas Meuleau: Approximate Solutions to Factored Markov Decision Processes via Greedy Search in the Space of Finite State Controllers. AIPS 2000: 323-330
- [c5]