Signal Processing: Image Communication, Volume 69
Volume 69, November 2018
- Yuming Fang, Xiaoqiang Zhang, Nevrez Imamoglu:
  A novel superpixel-based saliency detection model for 360-degree images. 1-7
- Marc Assens, Xavier Giró-i-Nieto, Kevin McGuinness, Noel E. O'Connor:
  Scanpath and saliency prediction on 360 degree images. 8-14
- Yucheng Zhu, Guangtao Zhai, Xiongkuo Min:
  The prediction of head and eye movement for 360 degree images. 15-25
- Rafael Monroy, Sebastian Lutz, Tejo Chalasani, Aljosa Smolic:
  SalNet360: Saliency maps for omni-directional images with CNN. 26-34
- Jesús Gutiérrez, Erwan J. David, Yashas Rai, Patrick Le Callet:
  Toolbox and dataset for the development of saliency and scanpath models for omnidirectional/360° still images. 35-42
- Mikhail Startsev, Michael Dorr:
  360-aware saliency estimation with conventional image saliency predictors. 43-52
- Federica Battisti, Sara Baldoni, Michele Brizzi, Marco Carli:
  A feature-based approach for saliency estimation of omni-directional images. 53-59
- Jing Ling, Kao Zhang, Yingxue Zhang, Daiqin Yang, Zhenzhong Chen:
  A saliency prediction model on 360 degree images using color dictionary based sparse representation. 60-68
- Pierre R. Lebreton, Alexander Raake:
  GBVS360, BMS360, ProSal: Extending existing saliency prediction models from 2D to omnidirectional images. 69-78