Khursheed Aurangzeb
Title: AI-Enabled Retinal Disease Diagnostic Systems: Revolutionizing Eye Care
Bio:
Khursheed Aurangzeb (Senior Member, IEEE) is an Associate Professor in the Department of Computer Engineering, College of Computer and Information Sciences, King Saud University (KSU), Riyadh, Saudi Arabia. He received his Ph.D. degree in Electronics Design from Mid Sweden University, Sweden, in June 2013 and his M.S. degree in Electrical Engineering (System-on-Chip Design) from Linköping University, Sweden, in 2009. He received his B.S. degree in Computer Engineering from the COMSATS Institute of Information Technology, Abbottabad, Pakistan, in 2006. Dr. Khursheed has authored and co-authored more than 120 publications, including IEEE/ACM/Springer/Hindawi/Sage/Tech Science/MDPI journal articles and flagship conference papers. He has more than 15 years of experience as an instructor and researcher in data analytics, machine/deep learning, signal processing, biomedical image processing, electronic circuits/systems, and embedded systems. He has been involved in many research projects as a principal investigator and co-principal investigator. His research interests span embedded systems, signal processing, wireless sensor networks, and camera-based sensor networks, with an emphasis on big data and machine/deep learning applied to healthcare, biomedical engineering, and smart grids.
Abstract:
The design and development of automated tools and frameworks for diagnosing chronic diseases has become possible due to rapid progress in artificial intelligence, machine/deep learning, and data analytics, together with the affordable availability of high-performance computing systems. Ground-truth images for different diseases are maintained in online databases, which help researchers develop automated diagnostic systems and improve their performance. Current work focuses on developing automated tools for the diagnosis of glaucoma and diabetic retinopathy (DR). For these eye diseases, many retinal fundus image databases are available online, such as STARE, CHASE-DB, DRIVE, RIMONE, and DRISHTI-GS. However, the number of ground-truth images in these databases is limited, whereas effective training and testing of any deep learning model requires substantially larger image sets. Researchers therefore typically apply data augmentation techniques to generate a large number of images from the few available in a given database. DR and glaucoma cause partial or complete blindness if not detected and treated at an early stage; they progress gradually and unobtrusively and are difficult to reverse or treat once advanced. Manual diagnosis is not feasible for large-scale screening at the population level, as it demands considerable time from ophthalmologists. Early diagnosis of glaucoma and DR is therefore vital for timely initiation of treatment and for preventing vision loss. For glaucoma diagnosis, an accurate estimate of the cup-to-disc ratio (CDR) is needed, whereas DR diagnosis requires segmentation of the retinal vessels and of the lesions that develop in the retina. Existing models for glaucoma and DR diagnosis rely on deep architectures with large numbers of parameters, which increases system complexity and training/testing time. We have developed several approaches for eye disease classification and faced distinct challenges in each task. This talk presents effective solutions to these challenges; these strategies will assist researchers in mitigating such issues when developing automated systems for eye disease classification.
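To make the CDR criterion concrete, the following is a minimal Python sketch (illustrative only, not part of the talk) of how a vertical cup-to-disc ratio could be computed from binary cup and disc segmentation masks, such as those a segmentation network might produce for a fundus image; the function names and mask format are assumptions made here for illustration.

    import numpy as np

    def vertical_extent(mask: np.ndarray) -> int:
        # Height in pixels of the region marked True/1 in a 2-D binary mask.
        rows = np.where(mask.any(axis=1))[0]
        return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)

    def cup_to_disc_ratio(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
        # Vertical CDR = vertical cup diameter / vertical disc diameter.
        # Both masks are assumed to be binary arrays of the same shape,
        # e.g. thresholded outputs of a cup/disc segmentation model.
        disc_height = vertical_extent(disc_mask)
        if disc_height == 0:
            raise ValueError("disc mask is empty")
        return vertical_extent(cup_mask) / disc_height

In clinical practice a CDR above roughly 0.6 is often treated as a sign of suspected glaucoma, which is why accurate cup and disc segmentation matters so much for automated screening.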
Trung Q. Duong
Title: Joint Optimal Design of Communications and Computing for Digital Twin-enabled Metaverse
Bio:
Trung Q. Duong (IEEE Fellow and AAIA Fellow) is a Canada Excellence Research Chair and Full Professor at Memorial University of Newfoundland, Canada. He is also a Research Chair of the Royal Academy of Engineering and an adjunct Chair Professor in Telecommunications at Queen’s University Belfast, UK. His current research interests include quantum optimisation and machine learning in wireless communications. He has published 500+ books, book chapters, and papers, with 18,000+ citations and an h-index of 74. He has served as an Editor for many reputable IEEE journals (IEEE Transactions on Wireless Communications, IEEE Transactions on Communications, IEEE Transactions on Vehicular Technology, IEEE Communications Surveys & Tutorials, IEEE Communications Letters, and IEEE Wireless Communications Letters) and has received best paper awards at many flagship conferences, including IEEE ICC 2014 and IEEE GLOBECOM 2016, 2019, and 2022. He is the recipient of the Research Fellowship (2015-2020) and Research Chair (2020-2025) of the Royal Academy of Engineering. In 2017, he was awarded the Newton Prize by the UK government. He is a Fellow of the IEEE and a Fellow of the Asia-Pacific Artificial Intelligence Association (AAIA).
Abstract:
It is expected that there will be 100 billion Internet-of-Things devices by the year 2025, so the need for improved wireless reliability and latency is greater than ever. However, implementing algorithms that ensure low-latency communication for massive numbers of power-constrained mobile devices conflicts directly with the need for ultra-reliability. Recent advances in communication technologies and powerful computation platforms open opportunities to implement a wide range of breakthrough applications, especially time-sensitive services in industrial automation. From a communication perspective, 6G with ultra-reliable and low-latency communications (URLLC) will play a vital role in the development and deployment of mission-critical applications, which place high demands on reliability and latency. This enables a wide range of new applications such as virtual reality (VR) with a 360-degree view, factory automation, autonomous vehicles, and remote healthcare. In addition, the development of digital twins opens new opportunities for transforming cyber-physical systems in terms of intelligence, efficiency, and flexibility. However, many technical issues remain to be resolved before digital twins can deliver high reliability and low latency in practical scenarios, owing to the complexity of resource allocation for short-packet transmissions. This talk will discuss digital twin technologies in industrial automation, which require high data rates with ultra-reliability at very low latency and for which URLLC is a natural choice. It then presents a joint communications and computing design for URLLC multi-tier computing in 6G that supports digital twin networks, covering not only fundamental requirements but also enabling technologies, visions, and future challenges.
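As background for the short-packet resource-allocation issue mentioned above, the following Python sketch (illustrative only, not the talk's method) evaluates the standard normal-approximation achievable rate for finite-blocklength transmission, R ≈ C − sqrt(V/n)·Q⁻¹(ε); expressions of this form are what make URLLC rate and resource allocation harder to optimise than the classical Shannon-capacity case. The function name and the AWGN single-link assumption are added here for illustration.

    from statistics import NormalDist
    import math

    def short_packet_rate(snr_db: float, blocklength: int, error_prob: float) -> float:
        # Normal (finite-blocklength) approximation of the achievable rate in
        # bits per channel use over an AWGN channel:
        #   R ~= C - sqrt(V / n) * Q^{-1}(eps)
        # where C is the Shannon capacity and V the channel dispersion.
        snr = 10.0 ** (snr_db / 10.0)
        capacity = math.log2(1.0 + snr)
        dispersion = (1.0 - 1.0 / (1.0 + snr) ** 2) * (math.log2(math.e) ** 2)
        q_inv = NormalDist().inv_cdf(1.0 - error_prob)  # Q^{-1}(eps)
        return capacity - math.sqrt(dispersion / blocklength) * q_inv

    # Example: at 10 dB SNR with a 200-symbol packet and a 1e-5 error target,
    # the rate (~3.03 bits/use) sits noticeably below the Shannon capacity
    # log2(1 + 10) ~ 3.46 bits/use, illustrating the short-packet penalty.
    print(short_packet_rate(10.0, 200, 1e-5))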