Status : Presented 
Date : 2020-12 
Title : Deep Reinforcement Learning-based Task Offloading Decision in the Time Varying Channel 
Authors : Jinkyo Jeong, Ilmin Kim, and Daesik Hong 
Conference : ICEIC 
Abstract : This paper proposes a dynamic task offloading decision control scheme that minimizes the total delay of executing a computation task, taking the time-varying channel into account. Specifically, we consider a practical task offloading process in which execution of the computation task spans multiple channel coherence times. To make an accurate offloading decision over multiple coherence times, we utilize model-free reinforcement learning, since the environment dynamics of the system, i.e., the channel transition probabilities, are challenging to estimate. We formulate the problem of minimizing the total delay of executing the computation task as a Markov decision process (MDP) and develop a model-free reinforcement learning algorithm to solve it. Simulation results show that the proposed scheme outperforms the conventional scheme.
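The MDP-plus-model-free-RL idea in the abstract can be illustrated with a toy sketch. Everything below is an assumption for illustration, not the paper's implementation: a two-state (good/bad) Markov channel stands in for the time-varying channel, the delay and work-unit numbers are made up, and tabular Q-learning replaces the paper's deep RL. The Q-value here is the expected remaining delay, so the agent picks the action that minimizes it.

```python
import random

# Illustrative sketch only (not the authors' scheme): binary offloading
# decision (local vs. offload) over a two-state Markov channel, learned
# with tabular Q-learning. All numeric parameters are hypothetical.

ACTIONS = (0, 1)   # 0 = compute locally, 1 = offload
P_STAY = 0.8       # channel keeps its state across a coherence time w.p. 0.8
TASK_UNITS = 4     # work units needed to finish the task

def step(channel, action):
    """One coherence time: return (delay incurred, work done, next channel)."""
    if action == 1:                       # offload: fast on a good channel
        delay, work = (1.0, 2) if channel == 1 else (3.0, 1)
    else:                                 # local: channel-independent
        delay, work = (2.0, 1)
    next_channel = channel if random.random() < P_STAY else 1 - channel
    return delay, work, next_channel

def train(episodes=3000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    random.seed(seed)
    # State = (channel state, remaining work); Q = expected remaining delay.
    Q = {(c, r, a): 0.0 for c in (0, 1)
         for r in range(TASK_UNITS + 1) for a in ACTIONS}
    for _ in range(episodes):
        channel, remaining = random.randint(0, 1), TASK_UNITS
        while remaining > 0:
            if random.random() < eps:                 # explore
                a = random.choice(ACTIONS)
            else:                                     # exploit: least delay
                a = min(ACTIONS, key=lambda x: Q[(channel, remaining, x)])
            delay, work, nxt = step(channel, a)
            rem2 = max(0, remaining - work)
            best_next = 0.0 if rem2 == 0 else min(
                Q[(nxt, rem2, x)] for x in ACTIONS)
            # Minimizing delay, so the TD target uses the min over actions.
            Q[(channel, remaining, a)] += alpha * (
                delay + gamma * best_next - Q[(channel, remaining, a)])
            channel, remaining = nxt, rem2
    return Q

Q = train()
```

Under these toy numbers, offloading on a good channel completes more work per coherence time at lower delay, so the learned policy prefers offloading when the channel is good; no channel transition probabilities are ever supplied to the learner, which is the point of the model-free approach.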

