{"id":2,"date":"2020-06-04T15:58:01","date_gmt":"2020-06-04T15:58:01","guid":{"rendered":"https:\/\/research.ece.ncsu.edu\/ai5gcompetition\/?page_id=2"},"modified":"2020-10-14T20:28:48","modified_gmt":"2020-10-15T00:28:48","slug":"sample-page","status":"publish","type":"page","link":"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/","title":{"rendered":"Welcome"},"content":{"rendered":"\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p style=\"font-size:49px\" class=\"has-text-color has-text-align-center has-wolfpack-red-color\"><strong>ITU Artificial Intelligence\/Machine Learning in 5G Challenge<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignright size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/research.ece.ncsu.edu\/wp-content\/uploads\/sites\/2\/2020\/06\/ituLogo.png\" alt=\"\" class=\"wp-image-8\" width=\"182\" height=\"209\" srcset=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/ituLogo.png 860w, https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/ituLogo-261x300.png 261w, https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/ituLogo-768x881.png 768w\" sizes=\"auto, (max-width: 182px) 100vw, 182px\" \/><\/figure><\/div>\n\n\n\n<p>North Carolina State University invites  you to participate in the ML5G-PHY [channel estimation] challenge, which is part of the  <a rel=\"noreferrer noopener\" href=\"https:\/\/www.itu.int\/en\/ITU-T\/AI\/challenge\/2020\" target=\"_blank\"><strong>ITU Artificial Intelligence\/Machine Learning in 5G Challenge<\/strong><\/a>, a competition that is scheduled to run from now until the end of the year. Participation in the Challenge is free of charge and open to all interested parties from countries that are members of ITU. 
Detailed information about the motivation for this competition can be found on the<strong>&nbsp;<a href=\"https:\/\/www.itu.int\/en\/ITU-T\/AI\/challenge\/2020\">Challenge website<\/a><\/strong>, which includes the document \u201c<a rel=\"noreferrer noopener\" href=\"https:\/\/www.itu.int\/en\/ITU-T\/AI\/challenge\/2020\/Documents\/ITU%20ML5G%20Global%20Challenge_proposal_v23.docx\" target=\"_blank\"><strong>ITU AI\/ML 5G Challenge \u2013 Applying AI\/ML in 5G networks. A Primer<\/strong><\/a>\u201d.<\/p>\n\n\n\n<p>In the subsequent sections, we present the details of our challenge, \u201cMachine Learning Applied to the Physical Layer of Millimeter-Wave MIMO Systems [channel estimation]\u201d at North Carolina State University (ML5G-PHY [channel estimation]), which is based on&nbsp;<a href=\"https:\/\/www.lasse.ufpa.br\/raymobtime\/\"><strong>Raymobtime datasets<\/strong><\/a>.<\/p>\n\n\n\n<div id=\"overview\" style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<p>The ML5G-PHY channel estimation challenge attacks one of the most difficult problems in the 5G physical layer: acquiring channel information to establish a millimeter wave MIMO link (initial access) considering a hybrid MIMO architecture. Approaches in the challenge will lead to important insights into what can be achieved using data-driven and\/or model-based approaches.<\/p>\n\n\n\n<p>Participants are encouraged to design either a ML-based approach or a more conventional signal processing algorithm that can learn some priors from the provided training data set to provide high accuracy channel estimates with low training overhead during the testing phase.<\/p>\n\n\n\n<h3 class=\"has-wolfpack-red-color has-text-color has-text-align-center wp-block-heading\">Challenge: Site-specific channel estimation with hybrid MIMO architectures<\/h3>\n\n\n\n<p><strong>In our site-specific channel estimation challenge, we focus on the uplink channel estimation problem. 
A set of training channels and training received pilots specific for the area covered by a given base station (BS) are available during off-line training<\/strong>. These data sets can be used either to train a given network or to learn priors that can be leveraged by a conventional algorithm, such as AoA\/AoD distributions, possible sparsity patterns, etc. In the testing phase, a different set of channels, still corresponding to the same site, will be used to evaluate the performance of the proposed approaches. The acquired data will correspond to a frequency selective hybrid millimeter wave MIMO-OFDM system as described in <a href=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/#references\">[1]<\/a>&#8211;<a href=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/#references\">[4]<\/a>, where both the transmitter and receiver are equipped with a hybrid architecture as in <a href=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/#Fig1\">Figure 1<\/a>. The precoders and combiners are hybrid, splitting the processing into an analog and a digital stage. The system operates with uniform linear arrays (ULAs) at both ends. In particular, we consider a transmitter at the user equipment (UE) side with N<sub>t<\/sub>=16 antennas and L<sub>t<\/sub>=2 RF chains; the receiver at the BS has N<sub>r<\/sub>=64 antennas and L<sub>r<\/sub>=4 RF chains. The number of streams to be transmitted is set as N<sub>s<\/sub>=2. The MIMO-OFDM system operates with K=256 subcarriers. 
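To make the dimensions concrete, the received pilots in this setup can be sketched in Python with numpy (an illustrative toy model with random data, not the provided Matlab code; the unit-modulus precoder F, combiner W, and pilot s below are our own assumptions):

```python
import numpy as np

# System dimensions stated in the challenge description.
Nt, Lt = 16, 2   # UE transmitter: antennas, RF chains
Nr, Lr = 64, 4   # BS receiver: antennas, RF chains
Ns, K = 2, 256   # data streams, OFDM subcarriers

rng = np.random.default_rng(0)

# Hypothetical random frequency-selective channel: one Nr x Nt matrix per subcarrier.
H = rng.standard_normal((K, Nr, Nt)) + 1j * rng.standard_normal((K, Nr, Nt))

# Analog-only pseudorandom training precoder and combiner (unit-modulus entries).
F = np.exp(2j * np.pi * rng.random((Nt, Lt))) / np.sqrt(Nt)
W = np.exp(2j * np.pi * rng.random((Nr, Lr))) / np.sqrt(Nr)

# One frequency-domain pilot per subcarrier and TX RF chain.
s = rng.standard_normal((K, Lt, 1)) + 1j * rng.standard_normal((K, Lt, 1))

# Noiseless received pilot per subcarrier: y[k] = W^H H[k] F s[k], an Lr x 1 vector.
y = W.conj().T @ H @ F @ s
print(y.shape)  # (256, 4, 1)
```

A real sounding stage repeats this with many precoder/combiner pairs (the training datasets use 100 pilots per channel) and adds noise at the target SNR.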
The mmWave channel is assumed to be frequency selective.<\/p>\n\n\n\n<div id=\"Fig1\" style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><a href=\"https:\/\/research.ece.ncsu.edu\/wp-content\/uploads\/sites\/2\/2020\/06\/MIMO-OFDM-Hybrid-3.png\" rel=\"Fig1\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"205\" src=\"https:\/\/research.ece.ncsu.edu\/wp-content\/uploads\/sites\/2\/2020\/06\/MIMO-OFDM-Hybrid-3-1024x205.png\" alt=\"\" class=\"wp-image-97\" srcset=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/MIMO-OFDM-Hybrid-3-1024x205.png 1024w, https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/MIMO-OFDM-Hybrid-3-300x60.png 300w, https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/MIMO-OFDM-Hybrid-3-768x154.png 768w, https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/MIMO-OFDM-Hybrid-3-1536x307.png 1536w, https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/MIMO-OFDM-Hybrid-3-2048x410.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption>Figure 1: Millimeter wave MIMO system based on hybrid architecture. In the site-specific channel estimation challenge, the BS operates as receiver and the UE as transmitter.<\/figcaption><\/figure><\/div>\n\n\n\n<p>We define a training pilot as an OFDM symbol known at both the TX and the RX. <strong>The challenge consists of estimating the frequency selective MIMO channel at low SNR from a low number of received training pilots<\/strong>. <strong>ML-based solutions or any type of conventional approach that exploits millimeter wave channel sparsity can be submitted as a proposed solution to the challenge<\/strong>. 
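To give a flavor of the sparsity-exploiting algorithms mentioned above, here is a minimal orthogonal matching pursuit (OMP) routine, a standard greedy sparse-recovery baseline in the compressive channel estimation literature; this is our own illustrative sketch, not code from the challenge or the cited papers:

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: find a k-sparse x with y ~ A @ x.

    A: (m, n) measurement/dictionary matrix; y: (m,) observations.
    """
    residual = y.astype(complex)
    support = []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(k):
        # Pick the dictionary column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(j)
        # Re-fit coefficients on the enlarged support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y.astype(complex), rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

In compressive mmWave channel estimation, A would combine the training precoders/combiners with an array response dictionary, and x would be the sparse beamspace representation of the channel.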
Note that conventional algorithms can also use the training data to learn any type of prior to be leveraged by the proposed algorithm. For example, in <a href=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/#references\">[5]<\/a>, online learning is used to obtain the AoD statistics at a BS and design a compressed sensing matrix for compressive beam alignment.<\/p>\n\n\n\n<div id=\"datasets\" style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<p>The <strong><a href=\"https:\/\/drive.google.com\/file\/d\/17LEaNTnwIBUlfY0ESLcU6Q8AfH7fjE8j\/view?usp=sharing\">Channel Training Dataset that we provide here<\/a><\/strong> consists of a collection of <strong>10,000 channels <\/strong>in HDF5 format obtained from <a href=\"https:\/\/www.lasse.ufpa.br\/raymobtime\/\"><strong>Raymobtime dataset<\/strong><\/a> s004, generated at the servers at UFPA and UT Austin hosted by the research groups coordinated by Prof. Aldebaro Klautau, Prof. Robert Heath, and Prof. Nuria Gonz\u00e1lez-Prelcic (the last two now at NC State University). Raymobtime data sets are generated by ray tracing. The methodology used to generate the channels is summarized in <a href=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/#Fig2\">Figure 2<\/a> and described in detail in <a href=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/#references\">[6]<\/a>.<\/p>\n\n\n\n<p>Participants have to train their networks using 100 received pilots in the frequency domain for each of the provided channels. To generate the pilots, <strong>we provide a Matlab implementation of the previously described MIMO-OFDM system<\/strong>, which considers analog-only pseudorandom precoders and combiners during training as described in [1], [2]. <strong><a href=\"https:\/\/drive.google.com\/file\/d\/1jyWiL5yrns8fom_BcZkWEvu3HRHphRn5\/view?usp=sharing\">The Matlab code can be downloaded here<\/a><\/strong>. 
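The provided Matlab script handles the SNR scaling internally via its dataset parameter; conceptually, generating a received-pilot dataset at a target SNR amounts to adding complex Gaussian noise scaled to the measured signal power, as in this hedged Python sketch (the function name and interface are ours, not from the provided code):

```python
import numpy as np

def add_noise_at_snr(y, snr_db, rng=None):
    """Add circularly symmetric complex Gaussian noise so that the average
    SNR over the array y equals snr_db (in dB)."""
    if rng is None:
        rng = np.random.default_rng()
    signal_power = np.mean(np.abs(y) ** 2)
    noise_power = signal_power / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (
        rng.standard_normal(y.shape) + 1j * rng.standard_normal(y.shape)
    )
    return y + noise

# The three training datasets correspond to these SNRs (data_set = 1, 2, 3).
snr_for_dataset = {1: -15, 2: -10, 3: -5}  # dB
```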
The script to be executed is &#8220;gen_RXtraining_SNR_Raymobtime&#8221;. The script can load or save data in MAT or HDF5 files. The second script, &#8220;gen_channel_ray_tracing rev&#8221;, creates the MIMO channel matrices from the ray tracing channels in the training data set. It is called by &#8220;gen_RXtraining_SNR_Raymobtime&#8221;. <em>Please note that this second script has been revised<\/em>.<\/p>\n\n\n\n<p><strong>Using the provided code and the Channel Training Dataset, participants should generate three training datasets containing the received pilots for SNR=-15 dB, -10 dB, and -5 dB.<\/strong> Each training set corresponds to a given SNR. Thus, Training Dataset 1 corresponds to SNR=-15 dB; Training Dataset 2 corresponds to SNR=-10 dB, and Training Dataset 3 corresponds to -5 dB. Participants only have to set the parameter data_set in the provided script to 1, 2, or 3. By doing this, the code initializes the SNR and the file name that stores the received training symbols to their corresponding values.<\/p>\n\n\n\n<div id=\"Fig2\" style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"369\" src=\"https:\/\/research.ece.ncsu.edu\/wp-content\/uploads\/sites\/2\/2020\/06\/data-flow-channel-estimation-1024x369.png\" alt=\"\" class=\"wp-image-9\" title=\"\" srcset=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/data-flow-channel-estimation-1024x369.png 1024w, https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/data-flow-channel-estimation-300x108.png 300w, https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/data-flow-channel-estimation-768x277.png 768w, 
https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/data-flow-channel-estimation-1536x554.png 1536w, https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/data-flow-channel-estimation-2048x739.png 2048w, https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/uploads\/sites\/2\/2020\/06\/data-flow-channel-estimation-1500x539.png 1500w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Figure 2. Block diagram of how the Raymobtime datasets are used for the ML5G-PHY channel estimation challenge, and the software packages used by the Raymobtime methodology.<\/figcaption><\/figure><\/div>\n\n\n\n<p><strong>As test datasets, we provide nine collections of received pilots, obtained at SNRs ranging from -20 to 0 dB for 1000 channels different from those in the training datasets but corresponding to the same site. <\/strong><em><strong>We also provide the corresponding sets of pilot symbols and the associated precoders and combiners<\/strong><\/em>. The channels corresponding to these pilots will not be available. 
To expose the accuracy-overhead trade-off achieved by a given approach, we generated nine test datasets, each containing a different number of received pilots per test channel.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong><a href=\"https:\/\/drive.google.com\/file\/d\/104HXRDXbq6Y9VBjrV0pB43Og_0rhA1MO\/view?usp=sharing\">Test Dataset 1 SNR1<\/a> <\/strong>contains 20 received pilots for each one of the considered test channels when the SNR is in the range [-20dB,-11dB[.<strong> <\/strong><\/li><li><a href=\"https:\/\/drive.google.com\/file\/d\/1Npaa848a3nyoXcZo8zalxrYFAK14aoAN\/view?usp=sharing\">Pilots, precoders and combiners for Test Dataset1 SNR1<\/a>, contains the 20 pilots per TX RF chain, and their corresponding precoders and combiners.<\/li><li><a href=\"https:\/\/drive.google.com\/file\/d\/176AHXcJKDKPp3BDyKVjus3VMVbCdHbKi\/view?usp=sharing\"><strong>Test Dataset 1 SNR2<\/strong><\/a><strong><strong> <\/strong><\/strong>contains 20 received pilots for each one of the considered test channels when the SNR is in the range [-11dB,-6dB[<strong>. <\/strong><\/li><li><a href=\"https:\/\/drive.google.com\/file\/d\/1UxXIAAbV4uB1fbolDO__t3D2gp98yqlS\/view?usp=sharing\">Pilots, precoders and combiners for Test Dataset1 SNR2<\/a>, contains the 20 pilots per TX RF chain, and their corresponding precoders and combiners.<\/li><li><strong><strong><a href=\"https:\/\/drive.google.com\/file\/d\/15oa4GXu6SN3tsPshuVntJfyWAO8BtJVV\/view?usp=sharing\">Test Dataset 1 SNR3<\/a> <\/strong><\/strong>contains 20 received pilots for each one of the considered test channels when the SNR is in the range [-6dB,0dB]. 
<\/li><li><a href=\"https:\/\/drive.google.com\/file\/d\/1oXofBri-SofDqrdK3CEfdfDcuWR_iTSl\/view?usp=sharing\">Pilots, precoders and combiners for Test Dataset1 SNR3<\/a>, contains the 20 pilots per TX RF chain, and their corresponding precoders and combiners.<\/li><li><strong><a href=\"https:\/\/drive.google.com\/file\/d\/1SarwE04Z46aGGnxD49rce3SgGJqMEbJ1\/view?usp=sharing\">Test Dataset 2 SNR1<\/a> <\/strong>contains 40 received pilots for each one of the considered test channels when the SNR is in the range [-20dB,-11dB[.<strong> <\/strong><\/li><li><a href=\"https:\/\/drive.google.com\/file\/d\/1iwIDPXw9GtysFtHT37tjljxJwSZuJSgy\/view?usp=sharing\">Pilots, precoders and combiners for Test Dataset2 SNR1<\/a>, contains the 40 pilots per TX RF chain, and their corresponding precoders and combiners.<\/li><li><strong><strong><a href=\"https:\/\/drive.google.com\/file\/d\/1LU1EV0xCdM-XZa1c7BwTWEpZ4p_Ps-b6\/view?usp=sharing\">Test Dataset 2 SNR2<\/a> <\/strong><\/strong>contains 40 received pilots for each one of the considered test channels when the SNR is in the range [-11dB,-6dB[<strong>. <\/strong><\/li><li><a href=\"https:\/\/drive.google.com\/file\/d\/1vE2cHQYHLyaOMerJr_C0Swy-2FT5Belz\/view?usp=sharing\">Pilots, precoders and combiners for Test Dataset2 SNR2<\/a>, contains the 40 pilots per TX RF chain, and their corresponding precoders and combiners.<\/li><li><strong><strong><a href=\"https:\/\/drive.google.com\/file\/d\/1sGqssurg1l4zYiKiM--ssUotW2cBrJhB\/view?usp=sharing\">Test Dataset 2 SNR3<\/a> <\/strong><\/strong>contains 40 received pilots for each one of the considered test channels when the SNR is in the range [-6dB,0dB]. 
<\/li><li><a href=\"https:\/\/drive.google.com\/file\/d\/1pXznDT1rXOuH1__KZRQwzJeAKb4AonKH\/view?usp=sharing\">Pilots, precoders and combiners for Test Dataset2 SNR3<\/a>, contains the 40 pilots per TX RF chain, and their corresponding precoders and combiners.<\/li><li><strong><a href=\"https:\/\/drive.google.com\/file\/d\/1vkatymVuMyFhUTuM9dwDjmkgwPgV-31A\/view?usp=sharing\">Test Dataset 3 SNR1<\/a> <\/strong>contains 80 received pilots for each one of the considered test channels when the SNR is in the range [-20dB,-11dB[.<strong> <\/strong><\/li><li><a href=\"https:\/\/drive.google.com\/file\/d\/1u7slRm3kUjSJSwoAkm-fvtC0BXGDYfhs\/view?usp=sharing\">Pilots, precoders and combiners for Test Dataset3 SNR1<\/a>, contains the 80 pilots per TX RF chain, and their corresponding precoders and combiners.<\/li><li><strong><strong><a href=\"https:\/\/drive.google.com\/file\/d\/1mmEEbhlWi_VACdFSKxiFQf85SZDfEc5_\/view?usp=sharing\">Test Dataset 3 SNR2<\/a> <\/strong><\/strong>contains 80 received pilots for each one of the considered test channels when the SNR is in the range [-11dB,-6dB[<strong>. 
<\/strong><\/li><li><a href=\"https:\/\/drive.google.com\/file\/d\/1EPjkIHE87fCV4jZVS8mJd4L-_Q4QcT5U\/view?usp=sharing\">Pilots, precoders and combiners for Test Dataset3 SNR2<\/a>, contains the 80 pilots per TX RF chain, and their corresponding precoders and combiners.<\/li><li><strong><strong><a href=\"https:\/\/drive.google.com\/file\/d\/1iUF26qh8erqBVT4M85gyY-8duZp0i1Sk\/view?usp=sharing\">Test Dataset 3 SNR3<\/a> <\/strong><\/strong>contains 80 received pilots for each one of the considered test channels when the SNR is in the range [-6dB,0dB].<\/li><li><a href=\"https:\/\/drive.google.com\/file\/d\/1oSijdKxiqbPBwDq2zgwd4zYKN8msnAMM\/view?usp=sharing\">Pilots, precoders and combiners for Test Dataset3 SNR3<\/a>, contains the 80 pilots per TX RF chain, and their corresponding precoders and combiners.<\/li><\/ul>\n\n\n\n<div id=\"evaluation\" style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<p>To evaluate the different approaches proposed by the participants, we consider the normalized mean square error (NMSE) of the channel estimates as the basic metric. To obtain a final score, we weight the NMSE differently depending on the SNR range and training length, giving more weight to the more challenging settings (lower SNR and fewer pilots). The final performance score (PS) is obtained as:<\/p>\n\n\n\n<p>PS = 0.5(0.5 NMSE(Test Dataset 1 SNR1) + 0.3 NMSE(Test Dataset 1 SNR2) + 0.2 NMSE(Test Dataset 1 SNR3))<\/p>\n\n\n\n<p>+ 0.3(0.5 NMSE(Test Dataset 2 SNR1) + 0.3 NMSE(Test Dataset 2 SNR2) + 0.2 NMSE(Test Dataset 2 SNR3))<\/p>\n\n\n\n<p>+ 0.2(0.5 NMSE(Test Dataset 3 SNR1) + 0.3 NMSE(Test Dataset 3 SNR2) + 0.2 NMSE(Test Dataset 3 SNR3))<\/p>\n\n\n\n<div id=\"rules\" style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<p>Models must be trained only with examples included in the provided datasets. 
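For reference, the NMSE metric and the weighted performance score defined above translate directly into code; in this sketch the function and variable names are ours, not part of the official evaluation tooling:

```python
import numpy as np

def nmse(H_est, H_true):
    """Normalized mean square error between channel estimate and ground truth."""
    return np.sum(np.abs(H_est - H_true) ** 2) / np.sum(np.abs(H_true) ** 2)

def performance_score(nmse_table):
    """nmse_table[d][s]: average NMSE on Test Dataset d (1..3, i.e. 20/40/80
    pilots) in SNR range s (1..3). Lower PS is better."""
    dataset_w = {1: 0.5, 2: 0.3, 3: 0.2}  # fewer pilots -> larger weight
    snr_w = {1: 0.5, 2: 0.3, 3: 0.2}      # lower SNR -> larger weight
    return sum(dataset_w[d] * sum(snr_w[s] * nmse_table[d][s] for s in snr_w)
               for d in dataset_w)
```

Since the weights in each sum add up to one, a method with the same NMSE in every setting gets exactly that NMSE as its final score.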
Using additional data extracted from other datasets is not allowed.<\/p>\n\n\n\n<p>You can participate in teams. The team members should be announced at the enrollment stage and will be considered to have contributed equally.<\/p>\n\n\n\n<p>The participants have to submit a brief document (up to 5 pages) in English describing the proposed approach, the source code of the proposed solution, and the estimated channels in the same format as in the Test Datasets. In particular, we require nine files containing the estimated channels for the nine Test Datasets. The provided information and models must allow us to replicate the reported results.<\/p>\n\n\n\n<div id=\"timeline\" style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<p class=\"has-text-color has-wolfpack-red-color\"><strong>Registration<\/strong> \u2192 <strong>July 31, 2020, defined by ITU<\/strong><\/p>\n\n\n\n<p class=\"has-text-color has-wolfpack-red-color\"><strong>Submission<\/strong> (Global round) \u2192&nbsp;<strong>October 2020, to be defined by ITU<\/strong><\/p>\n\n\n\n<p class=\"has-text-color has-wolfpack-red-color\"><strong>Award<\/strong> (Global round) \u2192&nbsp;<strong>October 2020, to be defined by ITU<\/strong><\/p>\n\n\n\n<div id=\"contact\" style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<p>All participants of the ML5G-PHY [channel estimation] task are required to register at the&nbsp;<a href=\"https:\/\/www.itu.int\/net4\/CRM\/xreg\/web\/Login.aspx?src=Registration&amp;Event=C-00007607\"><strong>ITU website<\/strong><\/a>&nbsp;<strong>before July 31, 2020<\/strong>, and also enroll their teams by sending an email to ml5gphy.ncsu@gmail.com. We will send a confirmation email for team enrollment within a few hours. 
In the email, include the team name, the name of each participant (recall that each one must have registered individually at the ITU website mentioned above), and a contact email (if different from the one used for enrollment).<\/p>\n\n\n\n<p>Also, all participants are strongly encouraged to join the ITU Challenge Slack channel <a rel=\"noreferrer noopener\" href=\"https:\/\/itu-challenge.slack.com\/\" target=\"_blank\">https:\/\/itu-challenge.slack.com<\/a> for announcements and questions\/comments. Instructions to join the Slack channel are available at&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/join.slack.com\/t\/itu-challenge\/shared_invite\/zt-eql00z05-CXelo7_aL0nHGM7xDDvTmA\" target=\"_blank\">https:\/\/join.slack.com\/t\/itu-challenge\/shared_invite\/zt-eql00z05-CXelo7_aL0nHGM7xDDvTmA<\/a>.<\/p>\n\n\n\n<div id=\"organizers\" style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n<div class=\"tmm tmm_organizers\"><div class=\"tmm_3_columns tmm_wrap tmm_theme_f\"><span class=\"tmm_two_containers_tablet\"><\/span><div class=\"tmm_container\"><div class=\"tmm_member\" style=\"border-top:#008473 solid 5px;\"><div class=\"tmm_photo tmm_pic_organizers_0\" style=\"background: url(https:\/\/research.ece.ncsu.edu\/wp-content\/uploads\/sites\/2\/2020\/06\/nuria.jpeg); margin-left: auto; margin-right:auto; background-size:cover !important;\"><\/div><div class=\"tmm_textblock\"><div class=\"tmm_names\"><span class=\"tmm_fname\">Nuria<\/span> <span class=\"tmm_lname\">Gonz\u00e1lez-Prelcic<\/span><\/div><div class=\"tmm_job\">NC State University<\/div><div class=\"tmm_scblock\"><a target=\"_blank\" class=\"tmm_sociallink\" href=\"https:\/\/scholar.google.es\/citations?user=ZkPtA-kAAAAJ&#038;hl=en\" title=\"\"><img decoding=\"async\" alt=\"\" src=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/plugins\/team-members\/inc\/img\/links\/website.png\"\/><\/a><\/div><\/div><\/div><div class=\"tmm_member\" 
style=\"border-top:#008473 solid 5px;\"><div class=\"tmm_photo tmm_pic_organizers_1\" style=\"background: url(https:\/\/research.ece.ncsu.edu\/wp-content\/uploads\/sites\/2\/2020\/06\/aldebaro.jpg); margin-left: auto; margin-right:auto; background-size:cover !important;\"><\/div><div class=\"tmm_textblock\"><div class=\"tmm_names\"><span class=\"tmm_fname\">Aldebaro<\/span> <span class=\"tmm_lname\">Klautau<\/span><\/div><div class=\"tmm_job\">LASSE\/UFPA<\/div><div class=\"tmm_scblock\"><a target=\"_blank\" class=\"tmm_sociallink\" href=\"https:\/\/www.lasse.ufpa.br\/aldebaro\/\" title=\"\"><img decoding=\"async\" alt=\"\" src=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/plugins\/team-members\/inc\/img\/links\/website.png\"\/><\/a><\/div><\/div><\/div><span class=\"tmm_two_containers_tablet\"><\/span><div class=\"tmm_member\" style=\"border-top:#008473 solid 5px;\"><div class=\"tmm_photo tmm_pic_organizers_2\" style=\"background: url(https:\/\/research.ece.ncsu.edu\/wp-content\/uploads\/sites\/2\/2020\/06\/heath2-1024x1024-1.jpg); margin-left: auto; margin-right:auto; background-size:cover !important;\"><\/div><div class=\"tmm_textblock\"><div class=\"tmm_names\"><span class=\"tmm_fname\">Robert<\/span> <span class=\"tmm_lname\">Heath Jr.<\/span><\/div><div class=\"tmm_job\">NC State University<\/div><div class=\"tmm_scblock\"><a target=\"_blank\" class=\"tmm_sociallink\" href=\"http:\/\/www.profheath.org\/home\/\" title=\"\"><img decoding=\"async\" alt=\"\" src=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/plugins\/team-members\/inc\/img\/links\/website.png\"\/><\/a><\/div><\/div><\/div><\/div><span class=\"tmm_columns_containers_desktop\"><\/span><div class=\"tmm_container\"><div class=\"tmm_member\" style=\"border-top:#008473 solid 5px;\"><div class=\"tmm_photo tmm_pic_organizers_3\" style=\"background: url(https:\/\/research.ece.ncsu.edu\/wp-content\/uploads\/sites\/2\/2020\/06\/Photo_Guvenc_Ismail_Oct2016.jpg); margin-left: 
auto; margin-right:auto; background-size:cover !important;\"><\/div><div class=\"tmm_textblock\"><div class=\"tmm_names\"><span class=\"tmm_fname\">Ismail<\/span> <span class=\"tmm_lname\">Guvenc<\/span><\/div><div class=\"tmm_job\">NC State University<\/div><div class=\"tmm_scblock\"><a target=\"_blank\" class=\"tmm_sociallink\" href=\"https:\/\/www.ece.ncsu.edu\/people\/iguvenc\/\" title=\"\"><img decoding=\"async\" alt=\"\" src=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/plugins\/team-members\/inc\/img\/links\/website.png\"\/><\/a><\/div><\/div><\/div><span class=\"tmm_two_containers_tablet\"><\/span><div class=\"tmm_member\" style=\"border-top:#008473 solid 5px;\"><div class=\"tmm_photo tmm_pic_organizers_4\" style=\"background: url(https:\/\/research.ece.ncsu.edu\/wp-content\/uploads\/sites\/2\/2020\/06\/Wenqing.png); margin-left: auto; margin-right:auto; background-size:cover !important;\"><\/div><div class=\"tmm_textblock\"><div class=\"tmm_names\"><span class=\"tmm_fname\">Wenqing<\/span> <span class=\"tmm_lname\">Zheng<\/span><\/div><div class=\"tmm_job\">UT Austin<\/div><div class=\"tmm_scblock\"><a target=\"_blank\" class=\"tmm_sociallink\" href=\"https:\/\/www.linkedin.com\/in\/wenqing-zheng-098012176\/\" title=\"\"><img decoding=\"async\" alt=\"\" src=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/plugins\/team-members\/inc\/img\/links\/website.png\"\/><\/a><\/div><\/div><\/div><div class=\"tmm_member\" style=\"border-top:#008473 solid 5px;\"><div class=\"tmm_photo tmm_pic_organizers_5\" style=\"background: url(https:\/\/research.ece.ncsu.edu\/wp-content\/uploads\/sites\/2\/2020\/06\/IMG_5670-768x1024-1.jpg); margin-left: auto; margin-right:auto; background-size:cover !important;\"><\/div><div class=\"tmm_textblock\"><div class=\"tmm_names\"><span class=\"tmm_fname\">Ilan<\/span> <span class=\"tmm_lname\">Sousa<\/span><\/div><div class=\"tmm_job\">LASSE\/UFPA<\/div><div class=\"tmm_scblock\"><a target=\"_blank\" 
class=\"tmm_sociallink\" href=\"http:\/\/lattes.cnpq.br\/1722150437378806\" title=\"\"><img decoding=\"async\" alt=\"\" src=\"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-content\/plugins\/team-members\/inc\/img\/links\/website.png\"\/><\/a><\/div><\/div><\/div><div style=\"clear:both;\"><\/div><\/div><\/div><\/div>\n\n\n\n<div id=\"references\" style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n[1]&nbsp;J. Rodr\u00edguez-Fern\u00e1ndez, N. Gonz\u00e1lez-Prelcic, K. Venugopal and R. W. Heath, &#8220;Frequency-Domain Compressive Channel Estimation for Frequency-Selective Hybrid Millimeter Wave MIMO Systems,&#8221; IEEE Transactions on Wireless Communications, vol. 17, no. 5, pp. 2946-2960, May 2018.<\/p>\n\n\n\n[2]&nbsp;J. P. Gonz\u00e1lez-Coma, J. Rodr\u00edguez-Fern\u00e1ndez, N. Gonz\u00e1lez-Prelcic, L. Castedo and R. W. Heath, &#8220;Channel Estimation and Hybrid Precoding for Frequency Selective Multiuser mmWave MIMO Systems,&#8221; IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 2, pp. 353-367, May 2018<\/p>\n\n\n\n[3] C. K. Anjinappa, A. C. Gurbuz, Y. Yapici and \u0130. G\u00fcven\u00e7, &#8220;Off-Grid Aware Channel and Covariance Estimation in mmWave Networks,&#8221; <em>IEEE Transactions on Communications<\/em>, Mar. 2020 (Early Access).<\/p>\n\n\n\n[4] M. Ruble and I. G\u00fcven\u00e7, &#8220;Multilinear SVD for Millimeter Wave Channel Parameter Estimation,&#8221; IEEE Access, vol. 8, pp. 75592-75606, Apr. 2020.&nbsp;<\/p>\n\n\n\n[5] Y. Wang, N. Jonathan Myers, N. Gonzalez-Prelcic, and Robert W. Heath Jr., \u201cSite-specific online compressive beam codebook learning in mmWave vehicular communication,\u201d submitted to IEEE Transactions on Wireless Communications, May 2020, available in arXiv.<\/p>\n\n\n\n[6]&nbsp;A. Klautau, P. Batista, N. Gonz\u00e1lez-Prelcic, Y. Wang and R. W. 
Heath, &#8220;5G MIMO Data for Machine Learning: Application to Beam-Selection Using Deep Learning,&#8221; in Proc. of the Information Theory and Applications Workshop (ITA), San Diego, CA, 2018, pp. 1-9.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>ITU Artificial Intelligence\/Machine Learning in 5G Challenge North Carolina State University invites you to participate in the ML5G-PHY [channel estimation] challenge, which is part of&#8230;<\/p>\n","protected":false},"author":2,"featured_media":137,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"open","template":"page-landing.php","meta":{"footnotes":""},"class_list":["post-2","page","type-page","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-json\/wp\/v2\/pages\/2","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-json\/wp\/v2\/comments?post=2"}],"version-history":[{"count":85,"href":"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-json\/wp\/v2\/pages\/2\/revisions"}],"predecessor-version":[{"id":168,"href":"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-json\/wp\/v2\/pages\/2\/revisions\/168"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-json\/wp\/v2\/media\/137"}],"wp:attachment":[{"href":"https:\/\/research.ece.ncsu.edu\/ai5gchallenge\/wp-json\/wp\/v2\/media?parent=2"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}