Patent Issued for Method And Apparatus For Selective Filtering Of Cubic-Face Frames (USPTO 10,368,067): MediaTek Inc.

08/14/2019 | 05:59pm EDT

2019 AUG 14 (NewsRx) -- By a News Reporter-Staff News Editor at Taiwan Daily Report -- A patent by the inventors Lin, Hung-Chih (Caotun Township, Nantou County, Taiwan); Lin, Jian-Liang (Su’ao Township, Yilan County, Taiwan); Li, Chia-Ying (Taipei, Taiwan); Huang, Chao-Chih (Zhubei, Taiwan); Chang, Shen-Kai (Zhubei, Taiwan), filed on June 9, 2017, was published online on August 12, 2019, according to news reporting originating from Alexandria, Virginia, by NewsRx correspondents.

Patent number 10,368,067 is assigned to MediaTek Inc. (Hsin-Chu, Taiwan).

The following quote was obtained by the news editors from the background information supplied by the inventors: “The 360-degree video, also known as immersive video, is an emerging technology, which can provide the ‘feeling as sensation of present’. The sense of immersion is achieved by surrounding a user with a wrap-around scene covering a panoramic view, in particular, a 360-degree field of view. The ‘feeling as sensation of present’ can be further improved by stereographic rendering. Accordingly, the panoramic video is being widely used in Virtual Reality (VR) applications.

“Immersive video involves capturing a scene using multiple cameras to cover a panoramic view, such as a 360-degree field of view. The immersive camera usually uses a set of cameras arranged to capture a 360-degree field of view. Typically, two or more cameras are used for the immersive camera. All videos must be taken simultaneously, and separate fragments (also called separate perspectives) of the scene are recorded. Furthermore, the set of cameras is often arranged to capture views horizontally, while other arrangements of the cameras are possible.

“FIG. 1 illustrates an exemplary processing chain for 360-degree spherical panoramic pictures. The 360-degree spherical panoramic pictures may be captured using a 360-degree spherical panoramic camera, such as a 3D capture device. Spherical image processing unit 110 accepts the raw image data from the 3D capture device to form 360-degree spherical panoramic pictures. The spherical image processing may include image stitching and camera calibration. The spherical image processing is known in the field and the details are omitted in this disclosure. An example of a 360-degree spherical panoramic picture from the spherical image processing unit 110 is shown as picture 112 in FIG. 1. The top side of the 360-degree spherical panoramic picture corresponds to the vertical top (or sky) and the bottom side points to the ground if the camera is oriented so that the top points up. However, if the camera is equipped with a gyro, the vertical top side can always be determined regardless of how the camera is oriented. In the 360-degree spherical panoramic format, the contents in the scene appear to be distorted. Often, the spherical format is projected onto the surfaces of a cube as an alternative 360-degree format. The conversion can be performed by a projection conversion unit 120 to derive the six face images 122 corresponding to the six faces of a cube. On the faces of the cube, these six images are connected at the edges of the cube.
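The projection conversion performed by a unit such as 120 can be sketched as follows: each pixel of a cube face defines a 3D viewing direction, which is then looked up in the spherical (equirectangular) source picture. This is a minimal illustrative sketch, not MediaTek's implementation; the face labels, axis convention, and function names are all assumptions made for the example.

```python
import math

# Map normalized face coordinates (u, v) in [-1, 1] to an (x, y, z)
# direction. The face names and axis convention here are illustrative.
FACE_DIRS = {
    "front":  lambda u, v: ( u,  -v,  1.0),
    "back":   lambda u, v: (-u,  -v, -1.0),
    "right":  lambda u, v: ( 1.0, -v, -u),
    "left":   lambda u, v: (-1.0, -v,  u),
    "top":    lambda u, v: ( u,  1.0,  v),
    "bottom": lambda u, v: ( u, -1.0, -v),
}

def face_pixel_to_equirect(face, col, row, face_size, eq_w, eq_h):
    """Return the equirectangular source pixel that a cube-face pixel samples."""
    # Pixel center -> normalized face coordinates in [-1, 1].
    u = 2.0 * (col + 0.5) / face_size - 1.0
    v = 2.0 * (row + 0.5) / face_size - 1.0
    x, y, z = FACE_DIRS[face](u, v)
    # Direction -> longitude/latitude on the viewing sphere.
    lon = math.atan2(x, z)                                # [-pi, pi]
    lat = math.asin(y / math.sqrt(x * x + y * y + z * z)) # [-pi/2, pi/2]
    # Longitude/latitude -> equirectangular pixel coordinates.
    ex = int((lon / math.pi + 1.0) / 2.0 * eq_w) % eq_w
    ey = min(eq_h - 1, int((0.5 - lat / math.pi) * eq_h))
    return ex, ey
```

For instance, the center pixel of the front face samples the center of the equirectangular picture, and the center of the top face samples its top row.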

“In order to preserve the continuity of neighboring cubic faces sharing a common cubic edge, various cubic face assembly techniques have been disclosed in a related U.S. Non-provisional patent application Ser. No. 15/390,954, filed on Dec. 27, 2016, with some common inventors and the same assignee. The assembled cubic-face frames may help to improve coding efficiency. Accordingly, cubic face assembler 130 is used to assemble the six cubic faces into an assembled cubic-face frame. The assembled image sequence is then subject to further processing. The cubic face assembler 130 may generate fully connected cubic-face frames or partially connected cubic-face frames. Since the 360-degree image sequences may require large storage space or high bandwidth for transmission, video encoding by a video encoder 140 may be applied to the video sequence consisting of a sequence of assembled cubic-face frames. At a receiver side or display side, the compressed video data is decoded using a video decoder 150 to recover the sequence of assembled cubic-face frames for display on a display device (e.g., a 3D display). Information related to the assembled cubic-face frames may be provided to the video encoder 140 so that the video can be encoded efficiently and/or properly and rendered appropriately.

“FIG. 2 illustrates an example of the projection conversion process to project a spherical panoramic picture into six cubic faces on a cube 210. The six cubic faces are separated into two groups. The first group 220 corresponds to the three cubic faces, labelled as 3, 4 and 5, that are visible from the front side. The second group 230 corresponds to the three cubic faces, labelled as 1, 2 and 6, that are visible from the back side of the cube.

“In conventional video coding or processing, the coding or processing system always assumes the input video sequence is in a rectangular frame format. Therefore, the cubic faces are further assembled into cubic-face frames. FIG. 3A illustrates two example cubic-face assembled frames (310 and 320) with blank areas, where two sets of fully interconnected cubic faces correspond to two different ways of unfolding the six faces from the cube. The unfolded cubic faces (also called a cubic net) are fitted into a smallest rectangular frame with blank areas filled with dummy data.

“FIG. 3B illustrates examples of another type of cubic-face assembling, where the six faces are assembled into a rectangular frame without blank areas. In FIG. 3B, frame 330 corresponds to a 1×6 assembled cubic frame, frame 340 corresponds to a 2×3 assembled cubic frame, frame 350 corresponds to a 3×2 assembled cubic frame and frame 360 corresponds to a 6×1 assembled cubic frame. As shown in FIG. 3B, the six cubic faces are compactly fitted into a rectangle without any blank area.
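The compact layouts of FIG. 3B amount to tiling the six equal-size faces into a rows × cols grid with rows × cols = 6. A minimal sketch follows; the face ordering within the grid is an assumption of this example, whereas the patent's figures fix their own order.

```python
import numpy as np

def assemble_compact(faces, rows, cols):
    """Tile six equal-size square cube faces into a rows x cols frame
    with no blank area (rows * cols must equal 6). The order in which
    the faces fill the grid is illustrative, not the patent's."""
    assert rows * cols == 6 and len(faces) == 6
    n = faces[0].shape[0]  # face side length in pixels
    frame = np.zeros((rows * n, cols * n), dtype=faces[0].dtype)
    for i, face in enumerate(faces):
        r, c = divmod(i, cols)  # row-major placement
        frame[r * n:(r + 1) * n, c * n:(c + 1) * n] = face
    return frame
```

Calling `assemble_compact(faces, 1, 6)`, `(2, 3)`, `(3, 2)` and `(6, 1)` would produce layouts shaped like frames 330, 340, 350 and 360, respectively.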

“FIG. 4A illustrates an exemplary block diagram of a video encoder system, such as HEVC (High Efficiency Video Coding), incorporating adaptive Inter/Intra prediction. The system includes two prediction modes: Inter prediction 420 and Intra prediction 430. The Inter prediction 420 utilizes motion estimation (ME) and motion compensation (MC) to generate temporal prediction for a current frame 410 based on one or more previously reconstructed pictures. The previously reconstructed pictures, also referred to as reference pictures, are stored in the Frame Buffer 480. As is known in the field, the ME for the Inter prediction uses a translational motion model, where the motion can be specified by an associated motion vector. The Intra prediction 430 generates a predictor for a current block by using reconstructed pixels at neighboring blocks in the same slice or picture. A switch 445 is used to select between the Inter prediction 420 and the Intra prediction 430. The selected prediction is subtracted from the corresponding signal of the current frame to generate prediction residuals using an Adder 440. The prediction residuals are processed using DCT (Discrete Cosine Transform) and Quantization (DCT/Q) 450 followed by Entropy Coder 460 to generate the video bitstream. Since reconstructed pictures are also required at the encoder side to form reference pictures, Inverse Quantization and Inverse DCT (IQ/IDCT) 452 are also used to generate reconstructed prediction residuals. The reconstructed residuals are then added to the prediction selected by the switch 445 to form reconstructed video data using another adder 442. In-loop Filtering 470 is often used to reduce coding artifacts due to compression before the reconstructed video is stored in the Frame Buffer 480. For example, the deblocking filter and Sample Adaptive Offset (SAO) have been used in HEVC. The Adaptive Loop Filter (ALF) is another type of in-loop filter that may be used to reduce artifacts in coded images.
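The residual path just described (Adder 440 → DCT/Q 450 → IQ/IDCT 452 → adder 442) can be reduced to a toy roundtrip. In this sketch the transform step is omitted for brevity, so only the quantization is modeled; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def encode_decode_block(block, pred, qstep):
    """Toy version of the residual path in FIG. 4A/4B: form prediction
    residuals, quantize them (transform omitted for brevity), then
    rebuild the block the way the decoder loop does."""
    residual = block - pred               # Adder 440: prediction residuals
    levels = np.round(residual / qstep)   # DCT/Q 450 (quantization only)
    recon_residual = levels * qstep       # IQ/IDCT 452 (inverse quantization)
    recon = pred + recon_residual         # adder 442: reconstructed block
    return levels, recon
```

Because quantization rounds each residual to the nearest multiple of `qstep`, the per-sample reconstruction error is bounded by `qstep / 2`. This also illustrates why the encoder must rebuild reference pictures through the same loop: the decoder only ever sees the quantized residuals.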

“FIG. 4B illustrates an example of a decoder system block diagram corresponding to the encoder in FIG. 4A. As shown in FIG. 4A, the encoder side also includes a decoder loop to reconstruct the reference video at the encoder side. Most decoder components are already used at the encoder side, except for the Entropy Decoder 461. Furthermore, only motion compensation is required for the Inter prediction decoder 421, since the motion vectors can be derived from the video bitstream and there is no need to search for the best motion vectors.

“As shown in FIG. 4A and FIG. 4B, a coding system often applies filtering to the reconstructed image in order to enhance visual quality by reducing the coding artifacts. In other video processing systems, filtering may also be applied to the underlying frames to reduce noise or to enhance image quality. However, the assembled frames converted from 3D source video may contain some special features that may cause artifacts or reduce coding efficiency during conventional filtering. Accordingly, the present invention addresses filtering issues associated with assembled cubic frames.”

In addition to the background information obtained for this patent, NewsRx journalists also obtained the inventors’ summary information for this patent: “Methods and apparatus of processing cube face images are disclosed. According to embodiments of the present invention, one or more discontinuous boundaries within each assembled cubic frame are determined and used for selective filtering, where the filtering process is skipped at said one or more discontinuous boundaries within each assembled cubic frame when the filtering process is enabled. Furthermore, the filtering process is applied to one or more continuous areas in each assembled cubic frame.

“When the selected cubic face format corresponds to one assembled cubic frame with blank areas, each discontinuous boundary is located between one cubic face and one blank area. When the selected cubic face format corresponds to one assembled cubic frame without blank areas, each discontinuous boundary is located between two neighboring cubic faces not sharing a common cubic edge. The assembled cubic frame without blank areas may correspond to a 1×6 assembled cubic frame, a 2×3 assembled cubic frame, a 3×2 assembled cubic frame or a 6×1 assembled cubic frame.

“The filtering process may correspond to in-loop filtering in video encoding or video decoding. For example, the filtering process may comprise de-blocking, sample adaptive offset (SAO), adaptive loop filter (ALF), or a combination thereof. Whether the filtering process is applied to one or more continuous areas in each assembled cubic frame, whether the filtering process is skipped at said one or more discontinuous boundaries within each assembled cubic frame or both can be indicated by signaling syntax of on/off control in a video bitstream at an encoder side or determined by parsing the syntax of on/off control in the video bitstream at a decoder side. The syntax of on/off control can be incorporated in a sequence, video, cubic face, VPS (video parameter set), SPS (sequence parameter set), or APS (application parameter set) level of the video bitstream.

“Whether the filtering process is applied to one or more continuous areas in each assembled cubic frame, whether the filtering process is skipped at said one or more discontinuous boundaries within each assembled cubic frame or both may also be indicated by signaling the selected cubic face format in a video bitstream at an encoder side or determined by parsing the selected cubic face format in the video bitstream at a decoder side. In one embodiment, the filtering process is skipped for all discontinuous boundaries between cubic faces and blank areas in assembled cubic frames with blank areas and for all discontinuous boundaries between neighboring cubic faces not sharing a common cubic edge in assembled cubic frames without blank areas. Whether the filtering process is applied to one or more continuous cubic face boundaries in each assembled cubic frame can be further indicated by signaling syntax of on/off control in a video bitstream at an encoder side or determined by parsing the syntax of on/off control in the video bitstream at a decoder side. In one embodiment, the syntax of on/off control is signaled at the encoder side or is parsed at the decoder side to control the filtering process for all continuous or discontinuous cubic face boundaries. In another embodiment, the syntax of on/off control is signaled at the encoder side for each cubic face boundary or parsed at the decoder side to control the filtering process for each cubic face boundary.”
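As a concrete illustration of the skip rule described in the summary, the sketch below applies a simple averaging filter (standing in for deblocking/SAO/ALF) across the internal vertical face boundaries of a compactly assembled frame, but leaves any boundary listed as discontinuous untouched. Which internal boundaries are discontinuous depends on the chosen face order, so here the set is simply supplied by the caller; this is an assumption-laden sketch of the idea, not the claimed method itself.

```python
import numpy as np

def selective_smooth(frame, face_size, discontinuous, strength=0.25):
    """Blend pixel pairs across each internal vertical face boundary of a
    compactly assembled cubic frame, skipping boundaries whose column
    index appears in `discontinuous` (faces not adjacent on the cube)."""
    out = frame.astype(float).copy()
    h, w = frame.shape
    for col in range(face_size, w, face_size):  # internal vertical boundaries
        if col in discontinuous:
            continue  # skip filtering: the content is discontinuous here
        left = out[:, col - 1].copy()
        right = out[:, col].copy()
        # Simple symmetric blend across the continuous boundary.
        out[:, col - 1] = (1 - strength) * left + strength * right
        out[:, col] = (1 - strength) * right + strength * left
    return out
```

In a frame built from three 4×4 faces side by side, passing `discontinuous={8}` smooths the seam at column 4 but leaves the seam at column 8 exactly as assembled, which is the behavior the patent's selective filtering aims for.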

The claims supplied by the inventors are:

“The invention claimed is:

“1. A method of processing cube face images, the method comprising: receiving sets of six cubic faces converted from spherical images in a 360-degree panoramic video sequence, wherein each set of six cubic faces corresponds to one spherical image projected onto a cube for rendering 360-degree virtual reality; assembling each set of cubic faces into one assembled cubic frame according to a selected cubic face format; determining one or more discontinuous boundaries within each assembled cubic frame; and processing the assembled cubic frames according to information related to said one or more discontinuous boundaries, wherein said processing the assembled cubic frames comprises: skipping filtering process at said one or more discontinuous boundaries within each assembled cubic frame when the filtering process is enabled.

“2. The method of claim 1, wherein each discontinuous boundary is located between one cubic face and one blank area.

“3. The method of claim 1, wherein each discontinuous boundary is located between two neighboring cubic faces not sharing a common cubic edge.

“4. The method of claim 1, wherein the filtering process corresponds to in-loop filtering in video encoding or video decoding.

“5. The method of claim 4, wherein the filtering process comprises de-blocking, sample adaptive offset (SAO), adaptive loop filter (ALF), or a combination thereof.

“6. The method of claim 1, wherein the syntax of on/off control is in a sequence, video, cubic face, VPS (video parameter set), SPS (sequence parameter set), or APS (application parameter set) level of the video bitstream.

“7. The method of claim 1, wherein the filtering process is skipped for all discontinuous boundaries between cubic faces and blank areas in assembled cubic frames with blank areas and for all discontinuous boundaries between neighboring cubic faces not sharing a common cubic edge in assembled cubic frames without blank areas.

“8. The method of claim 1, wherein whether the filtering process is applied to one or more continuous cubic face boundaries in each assembled cubic frame is further indicated by signaling syntax of on/off control in a video bitstream at an encoder side or determined by parsing the syntax of on/off control in the video bitstream at a decoder side.

“9. The method of claim 8, wherein the syntax of on/off control is signaled at the encoder side or is parsed at the decoder side to control the filtering process for all continuous cubic face boundaries.

“10. The method of claim 8, wherein the syntax of on/off control is signaled at the encoder side for each cubic face boundary or parsed at the decoder side to control the filtering process for each cubic face boundary.

“11. An apparatus for processing cube faces, the apparatus comprising one or more electronic circuits or processor arranged to: receive sets of six cubic faces converted from spherical images in a 360-degree panoramic video sequence, wherein each set of six cubic faces corresponds to one spherical image projected onto a cube for rendering 360-degree virtual reality; assemble each set of cubic faces into one assembled cubic frame according to a selected cubic face format; determine one or more discontinuous boundaries within each assembled cubic frame; and process the assembled cubic frames according to information related to said one or more discontinuous boundaries, wherein said processing the assembled cubic frames comprises: apply filtering process to one or more continuous areas in each assembled cubic frame and skipping the filtering process at said one or more discontinuous boundaries within each assembled cubic frame.”

For the URL and more information on this patent, see: Lin, Hung-Chih; Lin, Jian-Liang; Li, Chia-Ying; Huang, Chao-Chih; Chang, Shen-Kai. Method And Apparatus For Selective Filtering Of Cubic-Face Frames. U.S. Patent Number 10,368,067, filed June 9, 2017, and published online on August 12, 2019. Patent URL: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=10,368,067.PN.&OS=PN/10,368,067RS=PN/10,368,067

(Our reports deliver fact-based news of research and discoveries from around the world.)

Copyright © 2019 NewsRx LLC, Taiwan Daily Report, source Geographic Newsletters
