IDCC at MWC19 Panel Series Feeds | InterDigital Booth Demos at MWC19

Edge computing: New paradigms, new architecture

Updated on February 27, 2019 at 3:15 PM CET

It is the nature of hot topics to be high on hype and low on detail. The phrase 'edge computing' has been incredibly hot at MWC19, and indeed InterDigital's own booth contains not one, not two, but three demos looking at various aspects of edge computing. But to start fleshing out the topic, InterDigital also brought together some of the leading thinkers on edge developments for a panel moderated by Bob Gazda, Senior Director, InterDigital Labs.

There is a great deal of excitement about edge computing and virtualization throughout the industry, and this fever is not likely to cool off anytime soon. This is largely because, at the most basic level, edge computing is really about a new way of thinking - a shift away from a fully centralized client/server network computing model and toward a model where certain network and computing resources are pushed further from the core of the network, out to customer equipment, commodity servers and on-premises access points, and even into consumer, commercial and industrial devices themselves.

When this paradigm becomes a widespread reality, gone are the days when every request from a device has to travel all the way to the network core in order to be processed. Devices will communicate with each other, and will in effect communicate with the network itself. This is a major shift taking place in the industry, to put it mildly. It also represents a fusing of mobile networking and basic IT, where specialized telecom equipment gives way to largely software- or cloud-based services operating on standard hardware in IT server rooms.

As with any major shift, there are challenges ahead that must be addressed. Some of these challenges are technical, some are regulatory, and some are business challenges.

What is driving this shift in the industry? What applications will require this kind of approach? First, there's an overall trend toward major improvements in latency, which is probably the single biggest overriding factor driving edge computing. When we improve latency and reduce delay, more things are possible - it has an unlocking effect.

'You can take existing cloud applications that run in the cloud happily, and bring them to the edge to improve their performance, latency and customer experience,' said Dr. Rolf Schuster, Director of the Open Edge Computing Initiative. Alternatively, certain applications are what Schuster calls 'edge-native,' meaning that they need the edge or they won't work at all. 'For example, a head-mounted display that actually needs the low latency in order to function... it wouldn't work otherwise,' he said. 'We're also seeing applications around drones and automotive that need the edge.'

There's some confusion around just what is meant by 'the edge' - that is, where exactly the border between the network and the device lies. Arturo Azcorra, Director of IMDEA and Co-Founder of 5TONIC, suggested that there will probably need to be multiple borders: a first category of edge very close to the user (which he calls the 'extreme edge'); a second category slightly further from the user, probably in the base station; and then the cloud. This three-layered model is important, he says, 'because it will add huge flexibility to address many different types of applications.'

In addition to flexibility, this layered model also leads to discussions of service-based architecture (SBA). As the design of the network shifts away from a core-network dependent model, SBA provides a path forward for the industry. 'If you want to host services close to the end user dynamically, you can't stop at the mobile core network,' said Dirk Trossen, Senior Principal Engineer, InterDigital Labs. 'You have to include the radio access network, and the actual devices themselves as well.'

It's important that the industry not forget about the 'service' aspect of SBA. 'Typically the telco industry has placed too much emphasis on functionality, which drives things toward this point-to-point type of architecture,' said Todd Spraggins, Strategy Director of Oracle's Communications Global Business Unit. 'What's been refreshing with SBA is the notion of having a service that's API-defined, that will let people use innovation to create or consume those services.'

Finally, one of the most interesting and potentially disruptive aspects of edge computing and the fusion of mobile and IT is the potential industry transformation it may trigger. As Laurent Depersin, Director Research and Innovation for Technicolor's HOME Lab, points out, some edge services may not be provided by traditional telcos, and third-party providers will likely see the edge transformation as a trigger to enter the industry. 'I see huge opportunities for verticals: transportation, energy, facility management,' said Depersin. 'Maybe we'll see new actors joining the market and trying to sell this new resource.'

- The InterDigital Communications Team

Live Video Feed: IDCC at MWC19 from Hall 7, Stand 7C61

The Economics of High Frequency Bands in 5G

Updated on February 27, 2019 at 8:23 AM CET

Our very own Alan Carlton - Vice President of InterDigital Labs - was honored to be included in a panel discussion in the mainstage auditorium complex here at Mobile World Congress today. The panel's topic was one on the minds of a lot of network operators these days: the economics of high-frequency bands.

It's long been recognized that as radio access networks are deployed in these bands, network capacity increases greatly, but there's a downside too: high frequencies don't travel as far. Higher frequency networks will have to be much more densely deployed to ensure adequate coverage.
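The coverage side of that tradeoff follows directly from free-space path loss, which grows with the square of frequency at a fixed distance. A minimal sketch of the effect (the 3.5 GHz and 28 GHz bands and the 500 m distance are illustrative assumptions, not figures cited by the panel):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis formula)."""
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Same distance, two candidate 5G bands:
mid_band = fspl_db(500, 3.5e9)   # mid-band 5G at 500 m
mmwave = fspl_db(500, 28e9)      # millimeter wave at the same 500 m

# 28 GHz is 8x the frequency, so it suffers 20*log10(8) ~ 18 dB extra loss,
# which is why mmWave cells must be deployed far more densely.
extra_loss = mmwave - mid_band
```

Even before accounting for blockage and foliage (which hit millimeter wave harder still), that ~18 dB gap alone shrinks the usable cell radius dramatically, which is the densification cost the panel weighed against the capacity gain.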

This session featured speakers from Huawei, T-Mobile, and Ericsson. Chaobin Yang from Huawei talked about technical considerations for decreasing cost per bit of data transmission and reception, and the balance that must be achieved between smaller equipment and larger antenna arrays, with massive MIMO being a very attractive option.

Karri Kuoppamaki of T-Mobile provided an operator's viewpoint, highlighting the need for a multi-band approach, with millimeter wave bands covering dense urban areas, mid-frequency bands for the broader metro area, and lower bands outside the metro areas. 'Millimeter wave is not the only frequency band for 5G,' he said.

That approach was echoed by Thomas Noren, head of 5G Commercialization for Ericsson, who noted that a multiband approach allows lower frequency bands to carry more of the traffic that doesn't need high-band connections. Noren also reminded the audience that even an ultimate 5G deployment will probably include some 3G (and certainly some 4G/LTE) technology for the next several years.

Joining Carlton on the panel were Dr. Li Fung Chang, the 5G program architect for the Industrial Technology Research Institute in Taiwan; and Tiago Rodrigues, general manager of the Wireless Broadband Alliance.

Agreeing with the other speakers and panelists, Chang said that massive MIMO, carrier aggregation and spectrum sharing will be valuable techniques for the development of 5G, but that many practical issues must be considered as well.

Carlton discussed how InterDigital's 10-plus years of work in millimeter wave spectrum have made the company a strong proponent of a progressive roadmap in the space. 'We very much believe in the economics of high-frequency spectrum,' he said. 'There's lots of reasons for that, mainly the applications: the fronthaul, backhaul and fixed wireless access are commercial products. You can go buy them and deploy them, experiment with them and become experts in millimeter wave technology through them.'

He went on to describe how, when 5G New Radio was first envisioned, it was never a story that would play out solely below 6 gigahertz. The promise of 5G -- particularly the 100 Mbps bandwidth -- was always necessarily going to involve high-frequency spectrum working cleverly in concert with lower frequency bands. There simply was never going to be a way to meet the demands of 5G without such an approach.

Another important point to remember, Carlton said, is that the cost of millimeter wave spectrum at auction has fallen dramatically. 'Observationally, that spectrum is costing on the order of 40 times cheaper than sub-6 GHz spectrum,' he emphasized. 'If you marry that fact to the vision of 5G -- that we will one day get to the vision of a more open ecosystem in the RAN -- I think it paints a very positive story for the economics of millimeter wave and high-frequency spectrum's application.'

Where millimeter wave technology gets very interesting, in Carlton's view, is with regard to small cell technology. He sees two ways to approach this from an economic perspective. The first is to piggyback onto legacy small cell technology. With the majority of early 5G NR deployments happening in urban metro areas, the industry can effectively deploy millimeter wave small cells on the backs of LTE small cells. This will allow carriers to manage costs on a case-by-case basis instead of doing a massive infrastructure buildout all at once.

The panel and presentations were technically focused, but stepping back from them, one thing was clear. Unlike previous generations of cellular standards, which delivered a single, definable set of technology capabilities and spectrum requirements, 5G involves a broad variety of solutions for a broad variety of use cases, mobilizing a diverse array of spectrum and network assets. For operators, and for equipment companies, that brings both risk and opportunity. Fascinating times lie ahead as 5G deploys over the coming years.

- The InterDigital Communications Team

What the Use Cases Tell Us About 5G

Updated on February 26, 2019 at 5:11 PM CET

At one time, the world had reliable digital voice on mobile devices, 3G was under development, and people wondered: 'What's the use? What unmet need does it meet?' Then Apple revolutionized the handset form factor and made it a vehicle for web-based data, and the reason for 3G became clear. Then the world had solid web-based data on the phone, and people weren't entirely clear on what 4G was for. But then streamable video services and video on social platforms became a thing, and the need for 4G was clear.

Right up to the launch of 5G, we've been hearing much the same thing: 'I can already get great HD video on my phone with 4G - what could we possibly need 5G for?' InterDigital hosted a panel on that topic this afternoon at Mobile World Congress, and while we haven't checked a transcript or created a word cloud, one word kept coming back, over and over again, that might offer a clue about eventual use cases: latency.

If you look at what we have today, we have reliable delivery of video, yes, but much of that video is pre-produced and encoded on servers, and when you click on it you get a pause, maybe a little spinning circle, and THEN you get the content. And you don't care: waiting a second or two for the buffer to fill and the content to flow is irrelevant, since you're not time-bound. The process isn't interactive: a request is made, then the request is filled.

But it's becoming clear that that won't be good enough in a 5G world. The use cases people are talking about - future workplaces, pure interactivity, real-time engagement and tailoring of remote content - need the extremely low latency that 5G brings. Our own edge-and-fog 5G-Coral demo at MWC involves a phone interacting with a 5G edge server and controlling the view from a 360-degree camera, and the immediacy of the experience is startling. As a non-engineer, I'll describe it as follows: we used to have demos that went along the lines of 'see, you move the handset and now you can see the view moving to match it.' Our demo this year is more about watching the view change as you move the device. It has some latency in it, maybe a touch more than what we'll eventually see in 5G, but it feels immediate.
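A rough way to see why edge placement matters for a demo like this is a motion-to-photon budget. All the numbers below are illustrative assumptions, not measurements from our demo; the 20 ms comfort threshold for head-tracked displays is a commonly cited rule of thumb:

```python
# Illustrative latency budget for a head-tracked 360-degree video stream,
# comparing processing in a distant cloud with a nearby edge server.
MOTION_TO_PHOTON_BUDGET_MS = 20  # commonly cited comfort threshold

def total_latency_ms(network_rtt_ms, processing_ms, display_ms=5):
    """Sum the main contributors: network round trip, server-side
    processing of the view change, and display refresh."""
    return network_rtt_ms + processing_ms + display_ms

cloud = total_latency_ms(network_rtt_ms=60, processing_ms=10)  # distant data center
edge = total_latency_ms(network_rtt_ms=4, processing_ms=10)    # server one hop away
```

With these assumed numbers the cloud path (75 ms) blows well past the budget no matter how fast the server is, while the edge path (19 ms) fits: the network round trip, not compute, is the term only edge placement can shrink.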

It's not clear what the use case for that will be, but you can sense its possibilities: sports viewing where you can dynamically change your view during a live event. A virtual workplace that feels real. Remote doctors interacting with onsite medical personnel in real time. Group gaming that is simply a level beyond what we have today. But the sense growing on me is that the step-change from previous generations won't be about speed; it will be about latency.

- Patrick Van de Wille

How It All Comes Together

Updated on February 25, 2019 at 5:23 PM CET

Below you can read the report of the first panel we hosted at MWC19, on the topic of Immersive Video. It led me to some thoughts on how research streams eventually come together into an overarching solution that drives a new use case, a new business, a new experience.

The panel was discussing volumetric video and immersive experiences, with our guest from Fraunhofer mentioning 1.6 terabytes of data per minute. Do the appropriate napkin math (verified for me here by one of our very serious senior engineers) and that yields a bandwidth need of about 26 Gb/s to deliver an immersive view based on 4 screens at a time. You also run into latency issues and computing resource restrictions, both of which are being addressed by edge computing and edge network technology.
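The napkin math can be reproduced in a few lines. The 1.6 TB/min and 32-camera figures come from the panel; the 4-screens-at-a-time assumption reflects selective delivery of only the streams the viewer needs:

```python
TB = 1e12  # terabyte, decimal units

data_per_min_bytes = 1.6 * TB  # Fraunhofer figure: 1.6 TB per minute, all 32 cameras
total_bps = data_per_min_bytes * 8 / 60  # convert to bits per second (~213 Gb/s)

# Delivering only 4 of the 32 screens at any one time:
four_screen_bps = total_bps * 4 / 32

gbps = four_screen_bps / 1e9  # ~26.7 Gb/s, matching the napkin math
```

That 26.7 Gb/s figure for just four of the thirty-two streams shows why selective stream delivery and edge processing both have to be part of the solution.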

Listening to the presentation, I was struck by how discrete research streams can come together at some point in the future to yield a solution. While folks are working on research, the use case can seem impossibly far off - and, let's face it, it's business innovators who generally come up with the business ideas, not necessarily research scientists. And research is necessarily specialized and deeply absorbing. It can be tough to look up from it and see the linkages. For example, at one time different researchers were working on location services, e-commerce, and communications technology, possibly only dimly aware of each other. At some point, those capabilities combined to form ride-sharing.

So here was a video researcher talking about 32 cameras yielding an enormous number of screens and amount of data, and requiring ultra-low latency. Thirty feet away, one of our teams was demonstrating a technology that provides a scalable method for selecting relevant immersive streams so that not all the screens need to be delivered to the device, saving bandwidth and computing resources. Fifteen feet away from that, our edge computing and connectivity people were showing a technology that enables large video streams to be processed at the edge, reducing latency to a minimum.

Eventually, it's easy to see the possibility of all three technologies being implemented at once, in the same solution. That solution might be called immersive sports. It might be called the collaborative workplace of the future. And that's how it all comes together.

- Patrick Van de Wille

Volumetric Video: Science Fiction or Reality?

Updated on February 25, 2019 at 3:50 PM CET | WATCH THE FULL PANEL NOW >

Virtual reality and augmented reality have been scaling the hype curve for many years now. The technology is exciting and impressive, and filled with possibility. It is also still in a stage of development where it faces a range of very real technological challenges. But recent developments in immersive image and video capture and editing, coupled with advancements in display and streaming/distribution technology, appear to be drawing us all closer to a time and place where those challenges will be more completely addressed.

At MWC19 in Barcelona today, a fascinating panel discussion took place around the topic of immersive video. Moderated by Gael Seydoux, Director of Research and Innovation for Technicolor, the panel comprised three executives at the forefront of technology development in this space.

WATCH THE FULL PANEL NOW >

One of the demos at InterDigital's MWC19 booth is a volumetric photo booth that uses a 16-camera array to capture depth and volume via parallax, providing an immersive experience without the need for headsets. 'We need volumetric video today for VR experiences,' said Valerie Allié, Technical Area Leader of Light Field and Photonics for Technicolor. 'When you experience VR, you feel that something is missing, and what's missing is the volumetric effect, especially for real video. What we're demonstrating here at MWC19 is that volumetric video can be experienced on a smartphone or another 2D display, and you do have a different experience than standard video.'

Allié's comments were contrasted with those of Mimesys CEO Rémi Rousseau, who discussed his company's work developing real-world front-end applications for this technology - specifically, future workplace and collaboration capabilities. One such application is what he describes as a sort of 'holographic conferencing' - akin to the holodeck we may remember from Star Trek, or the holographic communications seen throughout the Star Wars films. 'We're fortunate to have 40 years of science fiction that shows us the path for volumetric communication,' Rousseau said with a laugh. 'We realized that the 'killer' use case for VR and AR is about communication, about presence, about the feeling of being there with someone.'

While companies like Mimesys are doing their development work largely using off-the-shelf capture sensors like Microsoft Kinect, Dr. Ralf Schaefer, Director of Research and Innovation at the Fraunhofer Institute for Telecommunications, is taking a more academic, research-scale approach to solving these complex problems. Using a framework of immersive videoconferencing and a studio studded with no fewer than 32 high-resolution cameras, Schaefer and his team are working in part to define what volumetric video really means and how it can be applied.

'The problem with videoconferencing today is that you have a camera, which looks at you; and through the display you always look down because the camera looks above you,' says Schaefer. 'So we started to look at the problem of how we establish eye contact and correct the viewing angles to create a more realistic conferencing experience.'

According to Schaefer, what 'volumetric' really means in the video context is a computer-built 3D model that can be manipulated and viewed from all sides, but with real people as the source imagery. This is all very complex, and these videos create mountains of data: 1.6 terabytes for each minute of volumetric videoconference, according to Schaefer's research.

Not only is there a significant data management and bandwidth challenge, there's a processing challenge as well. 'We're able to render any view of a subject with our video capture methods,' says Allié. 'If we reduce the number of cameras we use for capture, we can reduce the amount of time required to process the images.' It's a balance between image quality and processing speed/latency, and the data processing challenges remain monumental.

'Real-time processing is probably not feasible at the moment,' says Rousseau. 'It's too much data right now to achieve the very low latency we need for a truly real-time experience.'

But the industry is working hard to overcome these challenges. Image quality is improving, and some hybrid experiments look promising as intermediate solutions. Part of the panel discussion involved a theoretical approach in which a high-resolution volumetric still image of a subject could serve as the basis for computer-assisted animation. The highest quality images today are usually captured in a highly controlled studio environment, which suggests that consumer applications for volumetric video may be further away than those in the enterprise, industrial and entertainment sectors.

'We are confident that this technology could reach real-time capability in the future,' says Schaefer. 'But it likely won't be in the home right away.'

WATCH THE FULL PANEL NOW >

All in all, the volumetric video space is certainly going to be an interesting one to watch over the next several years, as improvements in image capture, bandwidth and latency help carry what was once science fiction fantasy into reality.

- The InterDigital Communications Team

MWC and Avoiding 'Show of Everything and Nothing' Syndrome

Updated on February 25, 2019 at 10:43 AM CET

Having been to Mobile World Congress yearly for well over a decade now, I've seen this conference evolve. It has evolved alongside this industry, the most transformative the world has seen since… maybe since agriculture. And MWC has evolved beautifully: it is certainly the most interesting and impressive industry conference in the world.

And yet every year, that marveling at the incredible evolution of the show is accompanied by a sense of fear that this conference, which has grown enormous, will reach and exceed that point where a show simply becomes too big. What has saved this show from that fate has been the combination we see in the wireless industry of incredible diversity of solution but tremendous unity of purpose: to connect things, new things, better, faster, and more seamlessly.

My fear is rooted in seeing the evolution that has taken place at that other tech industry mega-show, CES in Las Vegas. There was a time when the consumer electronics industry was small enough to be unified around a handful of themes: major home electronics, gaming and toys, home entertainment, perhaps some automotive. That made CES a great show. But then came the integration of electronics into everything, and so CES became a show about everything. Everything and nothing. One year, to get from a meeting to a wireless company's booth, I had to cut across three halls showing exercise equipment, vacuum cleaners, plush toys that spoke, massage chairs, the car stereo section…

This year in Barcelona, I've seen the first possible stirrings of that. As we entered the convention center, one of the outdoor relaxation areas, with chairs and tables, was fully taken over by Husqvarna, the Swedish makers of everything from motorcycles to chain saws. The theme of the area was the company's wireless autonomous lawnmowers.

Now don't get me wrong, I have no issue with what I assume is a fine product. But if MWC becomes a show that includes any company that simply sells a product with a wireless connection, we'll be in trouble. Because quite soon, the world will contain many, many products that include wireless connections. Drones. Windows. Scooters. Medical equipment. As our CEO Bill Merritt is fond of saying, in the future if something can produce some sort of data, it will be connected.

I'm not worried yet. Walking the halls and looking at the major demos, there's still a unity of purpose that makes this show great, and the GSMA's content people have uncompromisingly made the conference portion the highlight of the wireless year, with tremendous topics - some general, some quite technical, all relevant. InterDigital is lucky to have two people participating in that, the third year in a row we've been asked to speak. But every year I wonder how long we'll be able to balance the continued growth of the industry and expansion of wireless into new areas with the focus that has made MWC great.

- Patrick Van de Wille

Live Updates from IDCC at #MWC19

Posted on February 25, 2019 at 9:00 AM CET

Greetings from Barcelona! Our team is here, and all are putting the final touches on everything in preparation for Mobile World Congress 2019. As with every year, there are many exciting events planned for the upcoming week, and we'll be writing about them here. If you haven't already, you may want to bookmark this page to read updates from the show floor, and to watch our live video feed. As they are published, these posts will also be linked via our Twitter and LinkedIn pages, using the hashtags #MWC19 and #IDCCatMWC19.

We're hosting a live panel series each day at our booth, featuring a global collection of industry thought leaders. More detail on these panel discussions and a list of speakers can be found at https://www.interdigital.com/post/interdigital-live-from-mobile-world-congress-2019#. We'll post recaps, summaries and insights after each of those panel sessions, and other content as well.

There will also be a live video feed from the booth available on this page during show hours all week.

If you're at the show, be sure to stop by for a visit -- Hall 7, Stand 7C61 -- to see technology demonstrations of our radio and core network test beds, some new VR streaming technology, an emulator for researching autonomous vehicle safety via edge computing, and discussion of 5G standards work.

We look forward to seeing you and hope you enjoy MWC19!

- The InterDigital Communications Team


Disclaimer

InterDigital Inc. published this content on 27 February 2019 and is solely responsible for the information contained herein. Distributed by Public, unedited and unaltered, on 27 February 2019 14:34:02 UTC