TDMA policy to optimize resource utilization in Wireless Sensor Networks using reinforcement learning for ambient environment

dc.authorid: Korhan Cengiz / 0000-0001-6594-8861 (en_US)
dc.authorscopusid: Korhan Cengiz / 56522820200 (en_US)
dc.authorwosid: Korhan Cengiz / HTN-8060-2023 (en_US)
dc.contributor.author: Sah, Dinesh Kumar
dc.contributor.author: Amgoth, Tarachand
dc.contributor.author: Cengiz, Korhan
dc.contributor.author: Alshehri, Yasser
dc.contributor.author: Alnazzawi, Noha
dc.date.accessioned: 2022-10-28T08:57:43Z
dc.date.available: 2022-10-28T08:57:43Z
dc.date.issued: 2022 (en_US)
dc.department: İstinye Üniversitesi, Mühendislik ve Doğa Bilimleri Fakültesi, Bilgisayar Mühendisliği Bölümü (en_US)
dc.description.abstract: In the Internet of Things (IoT) and sensor networks, data packets travel from end nodes to the sink in a multihop fashion. Usually, a head node (selected among neighboring or special-purpose nodes) collects data packets from surrounding nodes and forwards them to the sink or to other head nodes. In time-division multiple access (TDMA) driven scheduling, nodes own slots in a time frame and are scheduled to forward data in the allotted time slot (as owner nodes) in each time frame. An owner node that has no data to forward in a time frame goes into sleep mode. Even though the owner node is asleep, the corresponding head node remains active throughout the time frame. This active period increases the head node's energy consumption. Moreover, because the head node receives no data packet while in the active state, throughput degrades significantly, ultimately leading to low channel utilization. In this work, we propose a Markov decision process (MDP) based policy for such head nodes to reduce the number of time slots wasted in the time frame. The proposal is the first MDP-based modeling of node scheduling in TDMA. The simulation results show that the proposed method outperforms existing adaptive scheduling algorithms in channel utilization, end-to-end delay, system utilization, and balance factor. (en_US)
dc.identifier.citation: Sah, D. K., Amgoth, T., Cengiz, K., Alshehri, Y., & Alnazzawi, N. (2022). TDMA policy to optimize resource utilization in wireless sensor networks using reinforcement learning for ambient environment. Computer Communications, 195, 162-172. doi:10.1016/j.comcom.2022.08.013 (en_US)
dc.identifier.doi: 10.1016/j.comcom.2022.08.013 (en_US)
dc.identifier.endpage: 172 (en_US)
dc.identifier.issn: 0140-3664 (en_US)
dc.identifier.scopus: 2-s2.0-85137167085 (en_US)
dc.identifier.scopusquality: Q1 (en_US)
dc.identifier.startpage: 162 (en_US)
dc.identifier.uri: https://doi.org/10.1016/j.comcom.2022.08.013
dc.identifier.uri: https://hdl.handle.net/20.500.12713/3205
dc.identifier.volume: 195 (en_US)
dc.identifier.wos: WOS:000859413200011 (en_US)
dc.identifier.wosquality: Q1 - Q2 (en_US)
dc.indekslendigikaynak: Web of Science (en_US)
dc.indekslendigikaynak: Scopus (en_US)
dc.institutionauthor: Cengiz, Korhan
dc.language.iso: en (en_US)
dc.publisher: Elsevier B.V. (en_US)
dc.relation.ispartof: Computer Communications (en_US)
dc.relation.publicationcategory: Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı (en_US)
dc.rights: info:eu-repo/semantics/closedAccess (en_US)
dc.subject: Cross-layer (en_US)
dc.subject: Internet of Things (en_US)
dc.subject: Q-learning (en_US)
dc.subject: Time-division Multiple Access (en_US)
dc.subject: Wireless Sensor Networks (en_US)
dc.title: TDMA policy to optimize resource utilization in Wireless Sensor Networks using reinforcement learning for ambient environment (en_US)
dc.type: Article (en_US)
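
Note: The abstract above frames the head node's wake/sleep decision as a reinforcement-learning problem (Q-learning over an MDP). As a rough, self-contained illustration only, and not the authors' algorithm, the Python sketch below shows tabular Q-learning for a single head node deciding, frame by frame, whether to stay active or sleep during an owner node's slot. The state encoding, reward values, learning parameters, and traffic probability are all hypothetical.

import random

# Illustrative tabular Q-learning sketch (not the paper's algorithm).
# State: whether the owner node had data to send in the previous frame (0/1).
# Action: head node sleeps (0) or stays active (1) during the owner's slot.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1              # hypothetical learning parameters
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}  # Q-table over state/action pairs

def reward(owner_has_data: bool, head_active: int) -> float:
    """Hypothetical reward: favor received packets and energy saved on idle slots."""
    if owner_has_data and head_active:
        return 1.0    # packet received in the allotted slot
    if not owner_has_data and not head_active:
        return 1.0    # idle slot skipped, energy saved
    return -1.0       # missed packet or wasted active period

def choose_action(state: int) -> int:
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice((0, 1))                       # explore
    return max((0, 1), key=lambda a: Q[(state, a)])        # exploit

state = 0
for _ in range(10_000):                          # simulated TDMA time frames
    action = choose_action(state)
    owner_has_data = random.random() < 0.3       # assumed traffic pattern
    r = reward(owner_has_data, action)
    next_state = int(owner_has_data)
    # Standard Q-learning update
    best_next = max(Q[(next_state, a)] for a in (0, 1))
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = next_state

print(Q)   # learned action values for each (state, action) pair

Under these assumed rewards, the greedy policy keeps the head node awake only when the owner node was recently active, which mirrors the slot-waste reduction the abstract targets.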

Files

Original bundle
Name: 1-s2.0-S0140366422003218-main.pdf
Size: 1.12 MB
Format: Adobe Portable Document Format
Description: Full Text
License bundle
Name: license.txt
Size: 1.44 KB
Format: Item-specific license agreed upon to submission