
Dr. Amir Semmo

Computer Graphics Engineer, Researcher and Technical Artist


E: amir.semmo (at) digitalmasterpieces.com

A: Digital Masterpieces GmbH, August-Bebel-Str. 26-53, 14482 Potsdam, Germany

T: +49 (0)30 34408 1778

My research topics are related to image and video processing, computer vision and GPU computing. I am particularly interested in expressive rendering under the umbrella of interactive casual creativity, and stylization of multi-dimensional image and video data.

Current research focus: computational aesthetics, non-photorealistic rendering, image and video abstraction, convolutional neural networks for image stylization

Digital Masterpieces GmbH: Head of R&D (Aug. 2019 - now)
Hasso Plattner Institute, University of Potsdam, Germany: Post-doctoral Researcher (Nov. 2016 - Jan. 2023)
Nanyang Technological University, Singapore: Visiting Fellow (Feb. 2018 - Apr. 2018)
Hasso Plattner Institute, University of Potsdam, Germany: Oct. 2005 - Nov. 2016 (Dr. rer. nat., 2016; M. Sc., 2011; B. Sc., 2009)

June 2023

Our paper “Interactive Control over Temporal Consistency while Stylizing Video Streams” has been accepted to the Eurographics Symposium on Rendering and will be published in a special issue of Computer Graphics Forum. The code is available on GitHub. EGSR 2023 will take place from June 28 to 30 in Delft, Netherlands.

November 2022

We updated our mobile iOS apps Oilbrush and Waterbrush with a new Rendering Core. Learn more about the new features in this blog post: Our New Rendering Core. Both apps are available from the App Store.

October 2022

Our paper “WISE: Whitebox Image Stylization by Example-based Learning” has been accepted to the European Conference on Computer Vision. The code is available on GitHub. ECCV 2022 will take place from October 23 to 27 in Tel Aviv, Israel.

September 2021

We are very honored to receive the Best Paper award for “Interactive Multi-level Stroke Control for Neural Style Transfer” at Cyberworlds 2021.

July 2021

We updated our mobile iOS app Graphite to version 1.4. Learn more about the new features in this video: Graphite 1.4 Showreel @ YouTube. Graphite is available from the App Store.

May 2021

Our paper “Service-based Analysis and Abstraction for Content Moderation of Digital Images” has been accepted to Graphics Interface 2021. GI 2021 will take place in May as a virtual event.

May 2021

Our submission “MotionViz: Artistic Visualization of Human Motion on Mobile Devices” has been accepted to the SIGGRAPH 2021 Appy Hour program. SIGGRAPH 2021 will take place in August as a virtual event.

February 2021

Our education paper “Teaching Data-driven Video Processing via Crowdsourced Data Collection” has also been accepted as a contribution to Eurographics 2021 and will be presented in May as part of Eurographics' Education Papers track.

February 2021

Our paper “Interactive Photo Editing on Smartphones via Intrinsic Decomposition” has been accepted as a contribution to Eurographics 2021 and will be published in Computer Graphics Forum. Eurographics 2021 will take place in May as a virtual event.

June 2020

Our submission “Graphite: Interactive Photo-to-Drawing Stylization on Mobile Devices” has been accepted to the SIGGRAPH 2020 Appy Hour program. SIGGRAPH 2020 will take place in August as a virtual event.

All artworks are based on original photos that have been transformed using interactive image stylization techniques. The techniques are implemented and available in iOS apps published by Digital Masterpieces.

Pencil/Pastel/Ink (Graphite)
Watercolor (Waterbrush)
Oil Paint (BeCasso)
2023

Interactive Control over Temporal Consistency while Stylizing Video Streams

Sumit Shekhar, Max Reimann, Moritz Hilscher, Amir Semmo, Jürgen Döllner, and Matthias Trapp
Computer Graphics Forum (Proceedings Eurographics Symposium on Rendering) 2023

Abstract, BibTeX, DOI, Project Page, Paper, Code (GitHub)

Image stylization has seen significant advancement and widespread interest over the years, leading to the development of a multitude of techniques. Extending these stylization techniques, such as Neural Style Transfer (NST), to videos is often achieved by applying them on a per-frame basis. However, per-frame stylization usually lacks temporal consistency, expressed by undesirable flickering artifacts. Most of the existing approaches for enforcing temporal consistency suffer from one or more of the following drawbacks: They (1) are only suitable for a limited range of techniques, (2) do not support online processing as they require the complete video as input, (3) cannot provide consistency for the task of stylization, or (4) do not provide interactive consistency control. Domain-agnostic techniques for temporal consistency aim to eradicate flickering completely but typically disregard aesthetic aspects. For stylization tasks, however, consistency control is an essential requirement as a certain amount of flickering adds to the artistic look and feel. Moreover, making this control interactive is paramount from a usability perspective. To achieve the above requirements, we propose an approach that stylizes video streams in real-time at full HD resolutions while providing interactive consistency control. We develop a lite optical-flow network that operates at 80 Frames per second (FPS) on desktop systems with sufficient accuracy. Further, we employ an adaptive combination of local and global consistency features and enable interactive selection between them. Objective and subjective evaluations demonstrate that our method is superior to state-of-the-art video consistency approaches.
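
For illustration, a minimal Python sketch of the blending scheme (hypothetical code, not the paper's implementation): OpenCV's Farneback flow stands in for the lite optical-flow network, and a single consistency parameter in [0, 1] plays the role of the interactive control.

import cv2
import numpy as np

def consistent_stylize(frames, stylize, consistency=0.5):
    # `stylize` is any per-frame stylization callable; consistency = 0
    # keeps the flickering per-frame results, 1 maximally re-uses the
    # flow-warped previous output.
    prev_gray, prev_out = None, None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        out = stylize(frame).astype(np.float32)
        if prev_out is not None:
            # backward flow: where each current pixel was in the previous frame
            flow = cv2.calcOpticalFlowFarneback(gray, prev_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            h, w = gray.shape
            grid = np.dstack(np.meshgrid(np.arange(w), np.arange(h)))
            warped = cv2.remap(prev_out, (grid + flow).astype(np.float32),
                               None, cv2.INTER_LINEAR)
            out = (1.0 - consistency) * out + consistency * warped
        prev_gray, prev_out = gray, out
        yield out.astype(np.uint8)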

@article{Shekhar:2023:ICT,
  author = { Shekhar, Sumit and Reimann, Max and Hilscher, Moritz and Semmo, Amir and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { Interactive Control over Temporal Consistency while Stylizing Video Streams },
  journal = { Computer Graphics Forum },
  year = { 2023 },
  volume = { 42 },
  number = { 4 },
  note = { Proceedings EGSR 2023 },
  doi = { 10.1111/cgf.14891 }
}
2022

WISE: Whitebox Image Stylization by Example-based Learning

Winfried Lötzsch, Max Reimann, Martin Büßemeyer, Amir Semmo, Jürgen Döllner, and Matthias Trapp
European Conference on Computer Vision (ECCV) 2022

Abstract, BibTeX, DOI, Paper (PDF), Project Page, Code (GitHub), Online Demo (Hugging Face)

Image-based artistic rendering can synthesize a variety of expressive styles using algorithmic image filtering. In contrast to deep learning-based methods, these heuristics-based filtering techniques can operate on high-resolution images, are interpretable, and can be parameterized according to various design aspects. However, adapting or extending these techniques to produce new styles is often a tedious and error-prone task that requires expert knowledge. We propose a new paradigm to alleviate this problem: implementing algorithmic image filtering techniques as differentiable operations that can learn parametrizations aligned to certain reference styles. To this end, we present WISE, an example-based image-processing system that can handle a multitude of stylization techniques, such as watercolor, oil, or cartoon stylization, within a common framework. By training parameter prediction networks for global and local filter parameterizations, we can simultaneously adapt effects to reference styles and image content, e.g., to enhance facial features. Our method can be optimized in a style-transfer framework or learned in a generative-adversarial setting for image-to-image translation. We demonstrate that jointly training an xDoG filter and a CNN for postprocessing can achieve comparable results to a state-of-the-art GAN-based method.
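
To make the whitebox idea concrete, the following PyTorch snippet sketches one common xDoG formulation built from torch ops only, so that sigma and the other parameters can be nn.Parameters optimized against a reference style; this is a hypothetical sketch of the paradigm, not the WISE codebase.

import torch
import torch.nn.functional as F

def gauss_kernel(sigma, radius=15):
    # 1D Gaussian built from torch ops so gradients can flow into sigma
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    k = torch.exp(-x ** 2 / (2.0 * sigma ** 2))
    return (k / k.sum()).view(1, 1, 1, -1)

def gauss_blur(img, sigma):
    # separable blur; img has shape (N, 1, H, W)
    k = gauss_kernel(sigma)
    pad = k.shape[-1] // 2
    img = F.conv2d(img, k, padding=(0, pad))
    return F.conv2d(img, k.transpose(2, 3), padding=(pad, 0))

def xdog(img, sigma, k=1.6, gamma=0.98, eps=0.1, phi=10.0):
    # difference of Gaussians followed by a soft ramp thresholding
    d = gauss_blur(img, sigma) - gamma * gauss_blur(img, k * sigma)
    return torch.where(d >= eps, torch.ones_like(d),
                       1.0 + torch.tanh(phi * (d - eps)))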

@inproceedings{Lotzsch:2022:WIS,
  author = { L{\"o}tzsch, Winfried and Reimann, Max and B{\"u}{\ss}emeyer, Martin and Semmo, Amir and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { {WISE: Whitebox Image Stylization by Example-based Learning} },
  year = { 2022 },
  pages = { 135--152 },
  location = { Tel Aviv, Israel },
  booktitle = { Proceedings European Conference on Computer Vision (ECCV) },
  publisher = { Springer-Verlag },
  address = { Berlin, Heidelberg },
  doi = { 10.1007/978-3-031-19790-1_9 }
}

Trios: Stylistic Rendering of 3D Photos

Ulrike Bath, Sumit Shekhar, Hendrik Tjabben, Amir Semmo, Sebastian Pasewaldt, Jürgen Döllner, and Matthias Trapp
ACM SIGGRAPH Appy Hour 2022

Abstract, BibTeX, DOI

3D photography has emerged as a medium that provides an immersive dimension to 2D photos. We present Trios, an interactive mobile app that combines the vividness of image-based artistic rendering with 3D photos by implementing an end-to-end pipeline for their generation and stylization. Trios uses Apple’s accelerated image-processing APIs and dedicated Neural Engine for depth generation and learning-based artistic rendering. The pipeline runs at interactive frame rates and outputs a compact video, which can easily be shared. Thus, it serves as a unique interactive tool for digital artists interested in creating immersive artistic content.

@inproceedings{Bath:2022:TSR,
  author = { Bath, Ulrike and Shekhar, Sumit and Tjabben, Hendrik and Semmo, Amir and Pasewaldt, Sebastian and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { Trios: Stylistic Rendering of 3D Photos },
  year = { 2022 },
  location = { Vancouver, BC, Canada },
  booktitle = { Proceedings SIGGRAPH Appy Hour },
  month = { 8 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3532723.3535467 }
}

CERVI: Collaborative Editing of Raster and Vector Images

Ulrike Bath, Sumit Shekhar, Julian Egbert, Julian Schmidt, Amir Semmo, Jürgen Döllner, and Matthias Trapp
The Visual Computer 2022

Abstract, BibTeX, DOI, Paper (PDF)

Various web-based image-editing tools and web-based collaborative tools exist in isolation. Research focused on bridging the gap between these two domains is sparse. We respond to the above and develop prototype groupware for real-time collaborative editing of raster and vector images in a web browser. To better understand the requirements, we conduct a preliminary user study and establish communication and synchronization as key elements. Existing groupware for text documents or presentations handles the above through well-established techniques. However, those cannot be extended as-is to raster or vector graphics manipulation. To this end, we develop a document model that is maintained by a server and is delivered and synchronized to multiple clients. Our prototypical implementation is based on a scalable client–server architecture: using WebGL for interactive browser-based rendering and WebSocket connections to maintain synchronization. We evaluate our work qualitatively through a post-deployment user study for three different scenarios. For quantitative evaluation, we perform a thorough performance measure on both client and server side, thereby identifying design recommendations for future concurrent image-editing software.
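
The document-model synchronization can be sketched in a few lines of Python, assuming the websockets package (hypothetical code, not CERVI's implementation): the server holds the single source of truth and broadcasts every edit to all connected clients.

import asyncio
import json
import websockets

DOCUMENT = {"layers": []}   # server-side document model: single source of truth
CLIENTS = set()

async def handler(ws):
    CLIENTS.add(ws)
    await ws.send(json.dumps({"type": "init", "doc": DOCUMENT}))
    try:
        async for message in ws:
            edit = json.loads(message)       # e.g., an added or moved shape
            DOCUMENT["layers"].append(edit)  # apply the edit to the model
            websockets.broadcast(CLIENTS, message)  # synchronize all clients
    finally:
        CLIENTS.remove(ws)

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())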

@article{Bath:2022:CER,
  author = { Bath, Ulrike and Shekhar, Sumit and Egbert, Julian and Schmidt, Julian and Semmo, Amir and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { CERVI: Collaborative Editing of Raster and Vector Images },
  journal = { The Visual Computer },
  year = { 2022 },
  volume = { 38 },
  number = { 12 },
  pages = { 4057--4070 },
  note = { Proceedings Cyberworlds 2022 },
  doi = { 10.1007/s00371-022-02522-1 }
}

Controlling Strokes in Fast Neural Style Transfer using Content Transforms

Max Reimann, Benito Buchheim, Amir Semmo, Jürgen Döllner, and Matthias Trapp
The Visual Computer 2022

Abstract, BibTeX, DOI, Paper (PDF)

Fast style transfer methods have recently gained popularity in art-related applications as they make a generalized real-time stylization of images practicable. However, they are mostly limited to one-shot stylizations concerning the interactive adjustment of style elements. In particular, the expressive control over stroke sizes or stroke orientations remains an open challenge. To this end, we propose a novel stroke-adjustable fast style transfer network that enables simultaneous control over the stroke size and intensity, and allows a wider range of expressive editing than current approaches by utilizing the scale-variance of convolutional neural networks. Furthermore, we introduce a network-agnostic approach for style-element editing by applying reversible input transformations that can adjust strokes in the stylized output. At this, stroke orientations can be adjusted, and warping-based effects can be applied to stylistic elements, such as swirls or waves. To demonstrate the real-world applicability of our approach, we present StyleTune, a mobile app for interactive editing of neural style transfers at multiple levels of control. Our app allows stroke adjustments on a global and local level. It furthermore implements an on-device patch-based upsampling step that enables users to achieve results with high output fidelity and resolutions of more than 20 megapixels. Our approach allows users to art-direct their creations and achieve results that are not possible with current style transfer applications.
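
The network-agnostic orientation control reduces to a reversible content transform; a minimal OpenCV sketch (hypothetical code; stylize denotes any fixed feed-forward style transfer network):

import cv2

def oriented_stylize(image, stylize, angle_deg):
    # rotate the input, stylize, rotate back: since CNN stylization is
    # not rotation-invariant, the synthesized strokes follow the angle
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    m_inv = cv2.getRotationMatrix2D((w / 2, h / 2), -angle_deg, 1.0)
    rotated = cv2.warpAffine(image, m, (w, h))
    return cv2.warpAffine(stylize(rotated), m_inv, (w, h))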

@article{Reimann:2022:CSF,
  author = { Reimann, Max and Buchheim, Benito and Semmo, Amir and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { Controlling Strokes in Fast Neural Style Transfer using Content Transforms },
  journal = { The Visual Computer },
  year = { 2022 },
  volume = { 38 },
  number = { 12 },
  doi = { 10.1007/s00371-022-02518-x }
}

Design Space of Geometry-based Image Abstraction Techniques with Vectorization Applications

Lisa Ihde, Amir Semmo, Jürgen Döllner, and Matthias Trapp
International Conference on Computer Graphics, Visualization and Computer Vision (WSCG) 2022

Abstract, BibTeX, Paper (PDF)

This paper presents a new approach to optimized vectorization for generating stylized artifacts such as drawings with a plotter or cutouts with a laser cutter. For this, we developed a methodology for transformations between raster and vector space. Moreover, we identify semiotic aspects of Geometry-based Stylization Techniques (GSTs) and their combination with raster-based stylization techniques. The system therefore also enables Fused Stylization Techniques (FSTs).

@article{Ihde:2022:DSG,
  author = { Ihde, Lisa and Semmo, Amir and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { Design Space of Geometry-based Image Abstraction Techniques with Vectorization Applications },
  year = { 2022 },
  journal = { J. {WSCG} },
  volume = { 30 },
  number = { 1--2 },
  note = { Proceedings WSCG 2022 },
  doi = { 10.24132/JWSCG.2022.12 }
}

NPRportrait 1.0: A three-level benchmark for non-photorealistic rendering of portraits

Paul L. Rosin, Yu-Kun Lai, David Mould, Ran Yi, Itamar Berger, Lars Doyle, Seungyong Lee, Chuan Li, Yong-Jin Liu, Amir Semmo, Ariel Shamir, Minjung Son, and Holger Winnemöller
Computational Visual Media 2022

Abstract, BibTeX, DOI, Paper (PDF)

Recently, there has been an upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer (NST). However, the state of performance evaluation in this field is poor, especially compared to the norms in the computer vision and machine learning communities. Unfortunately, the task of evaluating image stylisation is thus far not well defined, since it involves subjective, perceptual, and aesthetic aspects. To make progress towards a solution, this paper proposes a new structured, three-level, benchmark dataset for the evaluation of stylised portrait images. Rigorous criteria were used for its construction, and its consistency was validated by user studies. Moreover, a new methodology has been developed for evaluating portrait stylisation algorithms, which makes use of the different benchmark levels as well as annotations provided by user studies regarding the characteristics of the faces. We perform evaluation for a wide variety of image stylisation methods (both portrait-specific and general purpose, and also both traditional NPR approaches and NST) using the new benchmark dataset.

@article{Rosin:2022:NPR,
  author = { Rosin, Paul L. and Lai, Yu-Kun and Mould, David and Yi, Ran and Berger, Itamar and Doyle, Lars and Lee, Seungyong and Li, Chuan and Liu, Yong-Jin and Semmo, Amir and Shamir, Ariel and Son, Minjung and Winnem{\"o}ller, Holger },
  title = { NPRportrait 1.0: A three-level benchmark for non-photorealistic rendering of portraits },
  journal = { Computational Visual Media },
  year = { 2022 },
  volume = { 8 },
  number = { 3 },
  pages = { 445--465 },
  publisher = { Springer },
  doi = { 10.1007/s41095-021-0255-3 }
}
2021

MotionViz: Artistic Visualization of Human Motion on Mobile Devices

Maximilian Mayer, Philipp Trenz, Sebastian Pasewaldt, Mandy Klingbeil, Jürgen Döllner, Matthias Trapp, and Amir Semmo
ACM SIGGRAPH Appy Hour 2021

Abstract, BibTeX, DOI, Video (YouTube)

We present MotionViz, an interactive iOS mobile app that enables users to amplify motion and dynamics in videos. MotionViz implements novel augmented reality and expressive rendering techniques in an end-to-end processing pipeline: multi-dimensional video data is captured, analyzed, and processed to render animated graphical elements that help express figures and actions. Through an easy-to-use graphical user interface, users can choose from a curated list of artistic motion visualization effects, including the overlay of animated silhouettes, halos, and contour lines. MotionViz is based on Apple’s LiDAR technology, accelerated image processing APIs, and dedicated Neural Engine for real-time on-device processing.

@inproceedings{Mayer:2021:MVA,
  author = { Mayer, Maximilian and Trenz, Philipp and Pasewaldt, Sebastian and Klingbeil, Mandy and D{\"o}llner, J{\"u}rgen and Trapp, Matthias and Semmo, Amir },
  title = { MotionViz: Artistic Visualization of Human Motion on Mobile Devices },
  year = { 2021 },
  location = { Virtual Event, USA },
  booktitle = { Proceedings SIGGRAPH Appy Hour },
  month = { 8 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3450415.3464398 }
}

Interactive Multi-level Stroke Control for Neural Style Transfer

Max Reimann, Benito Buchheim, Amir Semmo, Jürgen Döllner, and Matthias Trapp
International Conference on Cyberworlds 2021

Abstract, BibTeX, DOI, Paper (PDF)

We present StyleTune, a mobile app for interactive multi-level control of neural style transfers that facilitates creative adjustments of style elements and enables high output fidelity. In contrast to current mobile neural style transfer apps, StyleTune supports users to adjust both the size and orientation of style elements, such as brushstrokes and texture patches, on a global as well as local level. To this end, we propose a novel stroke-adaptive feed-forward style transfer network that enables control over stroke size and intensity and allows a larger range of edits than current approaches. For an additional level of control, we propose a network-agnostic method for stroke-orientation adjustment by utilizing the rotation-variance of Convolutional Neural Networks (CNNs). To achieve high output fidelity, we further add a patch-based style transfer method that enables users to obtain output resolutions of more than 20 Megapixel (Mpix). Our approach empowers users to create many novel results that are not possible with current mobile neural style transfer apps.
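
The scale-variance exploited for stroke-size control can be sketched analogously (hypothetical code; stylize again denotes any fixed feed-forward network):

import cv2

def stylize_with_stroke_size(image, stylize, scale=2.0):
    # stylizing a downscaled image and upsampling the result yields
    # larger apparent strokes (scale > 1); scale < 1 yields finer ones
    h, w = image.shape[:2]
    small = cv2.resize(image, (max(1, round(w / scale)), max(1, round(h / scale))))
    return cv2.resize(stylize(small), (w, h), interpolation=cv2.INTER_CUBIC)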

@inproceedings{Reimann:2021:IMS,
  author = { Reimann, Max and Buchheim, Benito and Semmo, Amir and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { Interactive Multi-level Stroke Control for Neural Style Transfer },
  year = { 2021 },
  location = { Caen, France },
  booktitle = { Proceedings International Conference on Cyberworlds },
  publisher = { IEEE },
  pages = { 1--8 },
  doi = { 10.1109/CW52790.2021.00009 }
}

Service-based Analysis and Abstraction for Content Moderation of Digital Images

Moritz Hilscher, Hendrik Tjabben, Hendrik Rätz, Amir Semmo, Lonni Besançon, Jürgen Döllner, and Matthias Trapp
Graphics Interface 2021

Abstract, BibTeX, DOI, Paper (PDF), URL (OpenReview)

This paper presents a service-based approach towards content moderation of digital visual media while browsing web pages. It enables the automatic analysis and classification of possibly offensive content, such as images of violence, nudity, or surgery, and applies common image abstraction techniques, at different levels of abstraction, to lower the affective impact of such content. The system is implemented using a microservice architecture that is accessible via a browser extension, which can be installed in most modern web browsers. It can be used to facilitate content moderation of digital visual media such as digital images or to enable parental control for child protection.

@inproceedings{Hilscher:2021:SAA,
  author = { Hilscher, Moritz and Tjabben, Hendrik and R{\"a}tz, Hendrik and Semmo, Amir and Besan\c{c}on, Lonni and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { {Service-based Analysis and Abstraction for Content Moderation of Digital Images} },
  year = { 2021 },
  booktitle = { Proc. Graphics Interface },
  url = { https://openreview.net/forum?id=4j3avB-mrk }
}

Interactive Photo Editing on Smartphones via Intrinsic Decomposition

Sumit Shekhar, Max Reimann, Maximilian Mayer, Amir Semmo, Sebastian Pasewaldt, Jürgen Döllner, and Matthias Trapp
Computer Graphics Forum (Proceedings Eurographics) 2021

Abstract, BibTeX, DOI, Project Page, Paper - HQ (PDF), Paper - Optimized (PDF), Supplemental Material (PDF)

Intrinsic decomposition refers to the problem of estimating scene characteristics, such as albedo and shading, when one view or multiple views of a scene are provided. The inverse problem setting, where multiple unknowns are solved given a single known pixel-value, is highly under-constrained. When provided with correlating image and depth data, intrinsic scene decomposition can be facilitated using depth-based priors, which nowadays is easy to acquire with high-end smartphones by utilizing their depth sensors. In this work, we present a system for intrinsic decomposition of RGB-D images on smartphones and the algorithmic as well as design choices therein. Unlike state-of-the-art methods that assume only diffuse reflectance, we consider both diffuse and specular pixels. For this purpose, we present a novel specularity extraction algorithm based on a multi-scale intensity decomposition and chroma inpainting. At this, the diffuse component is further decomposed into albedo and shading components. We use an inertial proximal algorithm for non-convex optimization (iPiano) to ensure albedo sparsity. Our GPU-based visual processing is implemented on iOS via the Metal API and enables interactive performance on an iPhone 11 Pro. Further, a qualitative evaluation shows that we are able to obtain high-quality outputs. Furthermore, our proposed approach for specularity removal outperforms state-of-the-art approaches for real-world images, while our albedo and shading layer decomposition is faster than the prior work at a comparable output quality. Manifold applications such as recoloring, retexturing, relighting, appearance editing, and stylization are shown, each using the intrinsic layers obtained with our method and/or the corresponding depth data.
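
The iPiano scheme (Ochs et al.) iterates x_{n+1} = prox_{alpha*g}(x_n - alpha*grad f(x_n) + beta*(x_n - x_{n-1})); below is a minimal NumPy sketch with an L1 prox standing in for the albedo-sparsity term (hypothetical code, not the paper's Metal implementation):

import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ipiano(grad_f, x0, alpha=0.01, beta=0.75, lam=0.1, iters=200):
    # inertial proximal gradient for non-convex f plus convex g
    x_prev = x = x0.astype(np.float64)
    for _ in range(iters):
        x_new = soft_threshold(x - alpha * grad_f(x) + beta * (x - x_prev),
                               alpha * lam)
        x_prev, x = x, x_new
    return x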

@article{Shekhar:2021:IPE,
  author = { Shekhar, Sumit and Reimann, Max and Mayer, Maximilian and Semmo, Amir and Pasewaldt, Sebastian and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { {Interactive Photo Editing on Smartphones via Intrinsic Decomposition} },
  year = { 2021 },
  journal = { Computer Graphics Forum },
  volume = { 40 },
  number = { 2 },
  pages = { 497--510 },
  doi = { 10.1111/cgf.142650 }
}

Teaching Data-driven Video Processing via Crowdsourced Data Collection

Max Reimann, Ole Wegen, Sebastian Pasewaldt, Amir Semmo, Matthias Trapp, and Jürgen Döllner
Eurographics Education Papers 2021

Abstract, BibTeX, DOI

This paper presents the concept and experience of teaching an undergraduate course on data-driven image and video processing. When designing visual effects that make use of Machine Learning (ML) models for image-based analysis or processing, the availability of training data typically represents a key limitation when it comes to feasibility and effect quality. The goal of our course is to enable students to implement new kinds of visual effects by acquiring datasets via crowdsourcing that are used to train ML models as part of a video processing pipeline. First, we propose our course structure and best practices that are involved with crowdsourced data acquisitions. We then discuss the key insights we gathered from an exceptional undergraduate seminar project that tackles the challenging domain of video annotation and learning. In particular, we focus on how to practically develop annotation tools and collect high-quality datasets using Amazon Mechanical Turk (MTurk) in the budget- and time-constrained classroom environment. We observe that implementing the full acquisition and learning pipeline is entirely feasible for a seminar project, imparts hands-on problem-solving skills, and promotes undergraduate research.

@inproceedings{Reimann:2021:TDV,
  author = { Reimann, Max and Wegen, Ole and Pasewaldt, Sebastian and Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { {Teaching Data-driven Video Processing via Crowdsourced Data Collection} },
  year = { 2021 },
  location = { Vienna, Austria },
  series = { Proc. Eurographics Education Papers },
  publisher = { The Eurographics Association },
  doi = { 10.2312/eged.20211000 }
}
2020

Graphite: Interactive Photo-to-Drawing Stylization on Mobile Devices

Amir Semmo and Sebastian Pasewaldt
ACM SIGGRAPH Appy Hour 2020

Abstract, BibTeX, DOI

We present Graphite, an iOS mobile app that enables users to transform photos into drawings and illustrations with ease. Graphite implements a novel flow-aligned rendering approach that is based on the analysis of local image-feature directions. A stroke-based image stylization pipeline is parameterized to compute realistic directional hatching and contouring effects in real-time. Its art-direction enables users to selectively and locally fine-tune design mechanisms and variables—such as the level of detail, stroke granularity, degree of smudging, and sketchiness—using the Apple Pencil or touch gestures. In this respect, the looks of manifold artistic media can be simulated, including pencil, pen-and-ink, pastel, and blueprint illustrations. Graphite is based on Apple's CoreML, Metal and PhotoKit APIs for optimized on-device processing. Thus, interactive editing can be performed in real-time by utilizing the dedicated Neural Engine and GPU. Providing an in-app printing service, Graphite serves as a unique tool for creating personalized prints of the user's own digital artworks.
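
The flow alignment builds on local image-feature directions; here is a minimal NumPy/OpenCV sketch of such an orientation field via the smoothed structure tensor (hypothetical code, not Graphite's Metal pipeline):

import cv2
import numpy as np

def flow_orientations(gray, sigma=2.0):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    # smoothed structure tensor entries
    jxx = cv2.GaussianBlur(gx * gx, (0, 0), sigma)
    jyy = cv2.GaussianBlur(gy * gy, (0, 0), sigma)
    jxy = cv2.GaussianBlur(gx * gy, (0, 0), sigma)
    # major-eigenvector angle + 90 degrees = edge-tangent direction
    # along which hatching strokes would be drawn
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) + np.pi / 2.0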

@inproceedings{Semmo:2020:GIP,
  author = { Semmo, Amir and Pasewaldt, Sebastian },
  title = { Graphite: Interactive Photo-to-Drawing Stylization on Mobile Devices },
  year = { 2020 },
  location = { Virtual Event, USA },
  booktitle = { Proceedings SIGGRAPH Appy Hour },
  month = { 8 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3388529.3407306 }
}

Reducing Affective Responses to Surgical Images and Videos through Stylization

Lonni Besançon, Amir Semmo, David Biau, Bruno Frachet, Virginie Pineau, El Hadi Sariali, Marc Soubeyrand, Rabah Taouachi, Tobias Isenberg, and Pierre Dragicevic
Computer Graphics Forum 2020

Abstract, BibTeX, DOI, Paper (PDF), TEDx Talk by Lonni Besançon

We present the first empirical study on using color manipulation and stylization to make surgery images/videos more palatable. While aversion to such material is natural, it limits many people's ability to satisfy their curiosity, educate themselves, and make informed decisions. We selected a diverse set of image processing techniques to test them both on surgeons and lay people. While color manipulation techniques and many artistic methods were found unusable by surgeons, edge-preserving image smoothing yielded good results both for preserving information (as judged by surgeons) and reducing repulsiveness (as judged by lay people). We then conducted a second set of interviews with surgeons to assess whether these methods could also be used on videos and to derive good default parameters for information preservation. We provide extensive supplemental material at osf.io/4pfes/.
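
As a rough illustration of the kind of edge-preserving smoothing that performed well, a few repeated bilateral-filter passes in OpenCV (parameter values are illustrative only, not the defaults derived in the study):

import cv2

def soften_surgical_image(bgr, passes=4):
    # repeated bilateral filtering flattens gory texture while keeping
    # anatomical edges legible; args are d, sigmaColor, sigmaSpace
    out = bgr
    for _ in range(passes):
        out = cv2.bilateralFilter(out, 9, 40, 7)
    return out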

@inproceedings{BSBFPSSTID20,
  author = { Besan\c{c}on, Lonni and Semmo, Amir and Biau, David and Frachet, Bruno and Pineau, Virginie and Sariali, El Hadi and Soubeyrand, Marc and Taouachi, Rabah and Isenberg, Tobias and Dragicevic, Pierre },
  title = { Reducing Affective Responses to Surgical Images and Videos through Stylization },
  year = { 2020 },
  journal = { Computer Graphics Forum },
  volume = { 39 },
  number = { 1 },
  pages = { 462--483 },
  doi = { 10.1111/cgf.13886 }
}
2019

Consistent Filtering of Videos and Dense Light-Fields without Optic-Flow

Sumit Shekhar, Amir Semmo, Matthias Trapp, Okan Tarhan Tursun, Sebastian Pasewaldt, Karol Myszkowski, and Jürgen Döllner
24th International Symposium on Vision, Modeling, and Visualization 2019

Abstract, BibTeX, DOI, Project Page, Paper (PDF)

A convenient post-production video processing approach is to apply image filters on a per-frame basis. This allows the flexibility of extending image filters, originally designed for still images, to videos. However, per-image filtering may lead to temporal inconsistencies perceived as unpleasant flickering artifacts, which is also the case for dense light-fields due to angular inconsistencies. In this work, we present a method for consistent filtering of videos and dense light-fields that addresses these problems. Our assumption is that inconsistencies due to per-image filtering are represented as noise across the image sequence. We thus perform denoising across the filtered image sequence and combine per-image filtered results with their denoised versions. At this, we use saliency-based optimization weights to produce a consistent output while preserving the details simultaneously. To control the degree-of-consistency in the final output, we implemented our approach in an interactive real-time processing framework. Unlike state-of-the-art inconsistency removal techniques, our approach does not rely on optic-flow for enforcing coherence. Comparisons and a qualitative evaluation indicate that our method provides better results over state-of-the-art approaches for certain types of filters and applications.
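
The core idea, denoising across the filtered sequence and blending back, can be sketched as follows (hypothetical code; a constant weight stands in for the saliency-based optimization weights):

import numpy as np
from scipy.ndimage import gaussian_filter1d

def consistent_filtering(filtered_frames, weight=0.7):
    # treat per-frame filtering inconsistencies as noise along the time
    # axis: denoise temporally, then blend with the per-frame results
    stack = np.stack(filtered_frames).astype(np.float32)   # (T, H, W, C)
    denoised = gaussian_filter1d(stack, sigma=2.0, axis=0)
    return (1.0 - weight) * stack + weight * denoised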

@inproceedings{SSTTPMD19,
  author = { Shekhar, Sumit and Semmo, Amir and Trapp, Matthias and Tursun, Okan Tarhan and Pasewaldt, Sebastian and Myszkowski, Karol and D{\"o}llner, J{\"u}rgen },
  title = { Consistent Filtering of Videos and Dense Light-Fields without Optic-Flow },
  year = { 2019 },
  location = { Rostock, Germany },
  series = { Proceedings Vision, Modeling and Visualization },
  publisher = { The Eurographics Association },
  pages = { 125--134 },
  doi = { 10.2312/vmv.20191326 }
}

ViVid: Depicting Dynamics in Stylized Live Photos

Amir Semmo, Max Reimann, Mandy Klingbeil, Sumit Shekhar, Matthias Trapp, and Jürgen Döllner
ACM SIGGRAPH Appy Hour 2019

Abstract, BibTeX, DOI, Paper (PDF)

We present ViVid, a mobile app for iOS that empowers users to express dynamics in stylized Live Photos. This app uses state-of-the-art computer-vision techniques based on convolutional neural networks to estimate motion in the video footage that is captured together with a photo. Based on these analytics and best practices of contemporary art, photos can be stylized as a pencil drawing or cartoon look that includes design elements to visually suggest motion, such as ghosts, motion lines and halos. Its interactive parameterizations enable users to filter and art-direct composition variables, such as color, size and opacity, of the stylization process. ViVid is based on Apple's CoreML, Metal and PhotoKit APIs for optimized on-device processing. Thus, the motion estimation is scheduled to utilize the dedicated neural engine and GPU in parallel, while shading-based image stylization is able to process the video footage in real-time. This way, the app provides a unique tool for creating lively photo stylizations with ease.

@inproceedings{SRKPSTD18,
  author = { Semmo, Amir and Reimann, Max and Klingbeil, Mandy and Shekhar, Sumit and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { ViVid: Depicting Dynamics in Stylized Live Photos },
  year = { 2019 },
  location = { Los Angeles, CA, USA },
  booktitle = { Proceedings SIGGRAPH Appy Hour },
  month = { 7 },
  publisher = { ACM },
  address = { New York },
  pages = { 8:1--8:2 },
  doi = { 10.1145/3305365.3329726 }
}

Locally Controllable Neural Style Transfer on Mobile Devices

Max Reimann, Mandy Klingbeil, Sebastian Pasewaldt, Amir Semmo, Matthias Trapp, and Jürgen Döllner
The Visual Computer 2019

Abstract, BibTeX, DOI, Paper (PDF)

Mobile expressive rendering gained increasing popularity among users seeking casual creativity by image stylization and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. In this work, we first propose a problem characterization of interactive style transfer representing a trade-off between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for orchestration of neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. At this, we enhance state-of-the-art neural style transfer techniques by mask-based loss-terms that can be interactively parameterized by a generalized user interface to facilitate a creative and localized editing process. We report on a usability study and an online survey that demonstrate the ability of our app to transfer styles at improved semantic plausibility.
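
A minimal PyTorch sketch of a mask-based style loss term of the kind described above (hypothetical code; feat_image and feat_style are feature maps from any fixed encoder such as VGG, and mask is the user-painted region resized to the feature resolution):

import torch

def gram(feat):
    n, c, h, w = feat.shape
    f = feat.view(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def masked_style_loss(feat_image, feat_style, mask):
    # restrict the Gram statistics to the masked region so that the
    # sub-style is only transferred there; mask has shape (N, 1, H, W)
    return torch.mean((gram(feat_image * mask) - gram(feat_style)) ** 2)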

@article{RKPSTD19,
  author = { Reimann, Max and Klingbeil, Mandy and Pasewaldt, Sebastian and Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Locally Controllable Neural Style Transfer on Mobile Devices },
  journal = { The Visual Computer },
  year = { 2019 },
  volume = { 35 },
  number = { 11 },
  pages = { 1531--1547 },
  doi = { 10.1007/s00371-019-01654-1 }
}
2018

MaeSTrO: Mobile-Style Transfer Orchestration Using Adaptive Neural Networks

Max Reimann, Amir Semmo, Sebastian Pasewaldt, Mandy Klingbeil, and Jürgen Döllner
ACM SIGGRAPH Appy Hour 2018

Abstract, BibTeX, DOI, Paper (PDF)

We present MaeSTrO, a mobile app for image stylization that empowers users to direct, edit and perform a neural style transfer with creative control. The app uses iterative style transfer, multi-style generative and adaptive networks to compute and apply flexible yet comprehensive style models of arbitrary images at run-time. Compared to other mobile applications, MaeSTrO introduces an interactive user interface that empowers users to orchestrate style transfers in a two-stage process for an individual visual expression: first, initial semantic segmentation of a style image can be complemented by on-screen painting to direct sub-styles in a spatially-aware manner. Second, semantic masks can be virtually drawn on top of a content image to adjust neural activations within local image regions, and thus direct the transfer of learned sub-styles. This way, the general feed-forward neural style transfer is evolved towards an interactive tool that is able to consider composition variables and mechanisms of general artwork production, such as color, size and location-based filtering. MaeSTrO additionally enables users to define new styles directly on a device and synthesize high-quality images based on prior segmentations via a service-based implementation of compute-intensive iterative style transfer techniques.

@inproceedings{RSPKD18,
  author = { Reimann, Max and Semmo, Amir and Pasewaldt, Sebastian and Klingbeil, Mandy and D{\"o}llner, J{\"u}rgen },
  title = { MaeSTrO: Mobile-Style Transfer Orchestration Using Adaptive Neural Networks },
  year = { 2018 },
  location = { Vancouver, BC, Canada },
  booktitle = { Proceedings SIGGRAPH Appy Hour },
  month = { 8 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3213779.3213783 }
}

MaeSTrO: A Mobile App for Style Transfer Orchestration using Neural Networks

Max Reimann, Mandy Klingbeil, Sebastian Pasewaldt, Amir Semmo, Jürgen Döllner, and Matthias Trapp
International Conference on Cyberworlds 2018

Abstract, BibTeX, Paper (PDF), Presentation Slides (PDF), DOI

Mobile expressive rendering gained increasing popularity amongst users seeking casual creativity by image stylization and supports the development of mobile artists as a new user group. In particular, the neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles and media without deep prior knowledge of photo processing or editing. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization, e.g., with respect to image feature semantics or the user's ideas and interest. The goal of this work is to implement and enhance state-of-the-art neural style transfer techniques, providing a generalized user interface with interactive tools for local control that facilitate a creative editing process on mobile devices. At this, we first propose a problem characterization consisting of three goals that represent a trade-off between visual quality, run-time performance and ease of control. We then present MaeSTrO, a mobile app for orchestration of three neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors to direct a semantics-based composition and perform location-based filtering. Based on first user tests, we conclude with insights, showing different levels of satisfaction for the implemented techniques and user interaction design, pointing out directions for future research.

@inproceedings{RKPSDT18_2,
  author = { Reimann, Max and Klingbeil, Mandy and Pasewaldt, Sebastian and Semmo, Amir and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { MaeSTrO: A Mobile App for Style Transfer Orchestration using Neural Networks },
  year = { 2018 },
  location = { Singapore },
  booktitle = { Proceedings International Conference on Cyberworlds },
  publisher = { IEEE },
  pages = { 9--16 },
  doi = { 10.1109/CW.2018.00016 }
}

MNPR: A Framework for Real-Time Expressive Non-Photorealistic Rendering of 3D Computer Graphics

Santiago Montesdeoca, Hock Soon Seah, Amir Semmo, Pierre Bénard, Romain Vergne, Joelle Thollot, and Davide Benvenuti
Joint Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (Expressive) 2018

Abstract, BibTeX, DOI, Paper (PDF), Source Code (GitHub), Project Page (artineering.io)

We propose a framework for expressive non-photorealistic rendering of 3D computer graphics. Our work focuses on enabling stylization pipelines with a wide range of control, thereby covering the interaction spectrum with real-time feedback. In addition, we introduce control semantics that allow cross-stylistic art-direction, which is demonstrated through our implemented watercolor, oil and charcoal stylizations. Our generalized control semantics and their style-specific mappings are designed to be extrapolated to other styles, by adhering to the same control scheme. We then share our implementation details by breaking down the framework and elaborating on its inner workings. Finally, we evaluate the usefulness of each level of control through a user study involving 20 experienced artists and engineers in the industry, who have collectively spent over 245 hours using our system. Our framework is implemented in Autodesk Maya and open-sourced through this publication, to facilitate adoption by artists and further development by the expressive research and development community.

@inproceedings{MSSBVTB18,
  author = { Montesdeoca, Santiago and Seah, Hock Soon and Semmo, Amir and B{\'e}nard, Pierre and Vergne, Romain and Thollot, Joelle and Benvenuti, Davide },
  title = { MNPR: A Framework for Real-Time Expressive Non-Photorealistic Rendering of 3D Computer Graphics },
  year = { 2018 },
  pages = { 9:1--9:11 },
  location = { Victoria, BC, Canada },
  booktitle = { Proceedings of the Joint Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (Expressive) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3229147.3229162 }
}

Reducing Affective Responses to Surgical Images through Color Manipulation and Stylization

Lonni Besançon, Amir Semmo, David Biau, Bruno Frachet, Virginie Pineau, El Hadi Sariali, Rabah Taouachi, Tobias Isenberg, and Pierre Dragicevic
Joint Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (Expressive) 2018

Abstract, BibTeX, DOI, Paper (PDF)

We present the first empirical study on using color manipulation and stylization to make surgery images more palatable. While aversion to such images is natural, it limits many people's ability to satisfy their curiosity, educate themselves, and make informed decisions. We selected a diverse set of image processing techniques, and tested them both on surgeons and lay people. While many artistic methods were found unusable by surgeons, edge-preserving image smoothing gave good results both in terms of preserving information (as judged by surgeons) and reducing repulsiveness (as judged by lay people). Color manipulation turned out to be not as effective.

@inproceedings{BSBFPSTID18,
  author = { Besan\c{c}on, Lonni and Semmo, Amir and Biau, David and Frachet, Bruno and Pineau, Virginie and Sariali, El Hadi and Taouachi, Rabah and Isenberg, Tobias and Dragicevic, Pierre },
  title = { Reducing Affective Responses to Surgical Images through Color Manipulation and Stylization },
  year = { 2018 },
  pages = { 11:1--11:13 },
  location = { Victoria, BC, Canada },
  booktitle = { Proceedings of the Joint Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (Expressive) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3229147.3229158 }
}

Approaches for Local Artistic Control of Mobile Neural Style Transfer

Max Reimann, Mandy Klingbeil, Sebastian Pasewaldt, Amir Semmo, Jürgen Döllner, and Matthias Trapp
Expressive Poster Session 2018

Abstract, BibTeX, DOI, Paper (PDF)

This work presents enhancements to state-of-the-art adaptive neural style transfer techniques, thereby providing a generalized user interface with creativity tool support for lower-level local control to facilitate the demanding interactive editing on mobile devices. The approaches are implemented in a mobile app that is designed for orchestration of three neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors to perform location-based filtering and direct the composition. Based on first user tests, we conclude with insights, showing different levels of satisfaction for the implemented techniques and user interaction design, pointing out directions for future research.

@inproceedings{RKPSDT18,
  author = { Reimann, Max and Klingbeil, Mandy and Pasewaldt, Sebastian and Semmo, Amir and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { Approaches for Local Artistic Control of Mobile Neural Style Transfer },
  year = { 2018 },
  location = { Victoria, BC, Canada },
  booktitle = { Proceedings of the Joint Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (Expressive) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3229147.3229188 }
}

Service-based Processing and Provisioning of Image-Abstraction Techniques

Marvin Richter, Maximilian Söchting, Amir Semmo, Jürgen Döllner, and Matthias Trapp
International Conference on Computer Graphics, Visualization and Computer Vision (WSCG) 2018

Abstract, BibTeX, DOI, Paper (PDF)

Digital images and image streams represent two major categories of media captured, delivered, and shared on the Web. Techniques for their analysis, classification, and processing are fundamental building blocks in today's digital media applications, ranging from mobile image transformation apps to professional digital production suites. To efficiently process such digital media (1) independent of hardware requirements, (2) at different data complexity scales, and (3) with high-quality results poses several challenges for software frameworks and hardware systems, in particular for mobile devices. With respect to these aspects, service-based architectures are a common approach. However, unlike geodata, there is currently no standard approach for service definition, implementation, and orchestration in the domain of digital images and videos. This paper presents an approach for service-based image processing and provisioning of processing techniques by the example of image-abstraction techniques. The generality and feasibility of the proposed system is demonstrated by different client applications that have been implemented for the Android operating system, for Google's G-Suite Software-as-a-Service infrastructure, as well as for desktop systems. The performance of the system is discussed using the example of complex, resource-intensive image-abstraction techniques, such as watercolor rendering.

@inproceedings{RSSDT2018,
  author = { Richter, Marvin and S{\"o}chting, Maximilian and Semmo, Amir and D{\"o}llner, J{\"u}rgen and Trapp, Matthias },
  title = { Service-based Processing and Provisioning of Image-Abstraction Techniques },
  year = { 2018 },
  location = { Plzen, Czech Republic  },
  series = { Proceedings International Conference on Computer Graphics, Visualization and Computer Vision (WSCG) },
  pages = { 97--106 },
  url = { http://wscg.zcu.cz/WSCG2018/Short/P97-full.PDF },
  doi = { 10.24132/CSRN.2018.2802.13 }
}

Teaching Image-Processing Programming for Mobile Devices: A Software Development Perspective

Matthias Trapp, Sebastian Pasewaldt, Tobias Dürschmid, Amir Semmo, and Jürgen Döllner
Eurographics Education Papers 2018

Abstract, BibTeX, DOI, Paper (PDF)

In this paper we present the concept of a research course that teaches students image processing as a building block of mobile applications. Our goal with this course is to teach theoretical foundations, practical skills in software development as well as scientific working principles to qualify graduates to start as fully-valued software developers or researchers. The course includes teaching and learning focused on the nature of small team research and development as encountered in the creative industries dealing with computer graphics, computer animation and game development. We discuss our curriculum design and issues in conducting undergraduate and graduate research that we have identified through four iterations of the course. Joint scientific demonstrations and publications of the students and their supervisors as well as quantitative and qualitative evaluation by students underline the success of the proposed concept. In particular, we observed that developing using a common software framework helps the students to jump start their course projects, while industry software processes such as branching coupled with a three-tier breakdown of project features helps them to structure and assess their progress.

@inproceedings{TPDSD18,
  author = { Trapp, Matthias and Pasewaldt, Sebastian and D{\"u}rschmid, Tobias and Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Teaching Image-Processing Programming for Mobile Devices: A Software Development Perspective },
  year = { 2018 },
  location = { Delft, Netherlands },
  series = { Proceedings Eurographics Education Papers },
  publisher = { The Eurographics Association },
  doi = { 10.2312/eged.20181002 }
}
2017

ProsumerFX: Mobile Design of Image Stylization Components

Tobias Dürschmid, Maximilian Söchting, Amir Semmo, Matthias Trapp, and Jürgen Döllner
SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications 2017

Abstract, BibTeX, DOI, Paper (PDF)

With the continuous advances of mobile graphics hardware, high-quality image stylization—e.g., based on image filtering, stroke-based rendering, and neural style transfer—is becoming feasible and increasingly used in casual creativity apps. The creative expression facilitated by these mobile apps, however, is typically limited with respect to the usage and application of pre-defined visual styles, which ultimately do not include their design and composition—an inherent requirement of prosumers. We present ProsumerFX, a GPU-based app that enables users to interactively design parameterizable image stylization components on-device by reusing building blocks of image processing effects and pipelines. Furthermore, the presentation of the effects can be customized by modifying the icons, names, and order of parameters and presets. Thereby, the customized visual styles are defined as platform-independent effects and can be shared with other users via a web-based platform and database. Together with the presented mobile app, this system approach supports collaborative work for designing visual styles, including their rapid prototyping, A/B testing, publishing, and distribution. Thus, it satisfies the needs for creative expression of both professionals as well as the general public.

@inproceedings{DSSTD17,
  author = { D{\"u}rschmid, Tobias and S{\"o}chting, Maximilian and Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { ProsumerFX: Mobile Design of Image Stylization Components },
  year = { 2017 },
  location = { Bangkok, Thailand },
  series = { Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3132787.3139208 }
}

Challenges in User Experience Design of Image Filtering Apps

Mandy Klingbeil, Sebastian Pasewaldt, Amir Semmo, and Jürgen Döllner
SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications 2017

Abstract, BibTeX, DOI, Paper (PDF)

Photo filtering apps successfully deliver image-based stylization techniques to a broad audience, in particular in the ubiquitous domain (e.g., smartphones, tablet computers). Interacting with these inherently complex techniques has so far mostly been approached in two different ways: (1) by exposing many (technical) parameters to the user, resulting in a professional application that typically requires expert domain knowledge, or (2) by hiding the complexity via presets that only allow the application of filters but prevent creative expression thereon. In this work, we outline challenges of and present approaches for providing interactive image filtering on mobile devices, thereby focusing on how to make them usable for people in their daily life. This is discussed using the example of BeCasso, a user-centric app for assisted image stylization that targets two user groups: mobile artists and users seeking casual creativity. Through user research, qualitative and quantitative user studies, we identify and outline usability issues that proved to prevent both user groups from reaching their objectives when using the app. On the one hand, user-group targeting has been improved by an optimized user experience design. On the other hand, multiple levels of control have been implemented to ease the interaction and hide the underlying complex technical parameters. Evaluations underline that the presented approach can increase the usability of complex image stylization techniques for mobile apps.

@inproceedings{KPSD17,
  author = { Klingbeil, Mandy and Pasewaldt, Sebastian and Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Challenges in User Experience Design of Image Filtering Apps },
  year = { 2017 },
  location = { Bangkok, Thailand },
  series = { Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3132787.3132803 }
}

Demo: Pictory - Neural Style Transfer and Editing with CoreML

Sebastian Pasewaldt, Amir Semmo, Mandy Klingbeil, and Jürgen Döllner
SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications (Demo) 2017

Abstract, BibTeX, DOI, Paper (PDF)

This work presents advances in the design and implementation of Pictory, an iOS app for artistic neural style transfer and interactive image editing using the CoreML and Metal APIs. Pictory combines the benefits of neural style transfer, e.g., high degree of abstraction on a global scale, with the interactivity of GPU-accelerated state-of-the-art image-based artistic rendering on a local scale. Thereby, the user is empowered to create high-resolution, abstracted renditions in a two-stage approach. First, a photo is transformed using a pre-trained convolutional neural network to obtain an intermediate stylized representation. Second, image-based artistic rendering techniques (e.g., watercolor, oil paint or toon filtering) are used to further stylize the image. Thereby, fine-scale texture noise, introduced by the style transfer, is filtered and interactive means are provided to individually adjust the stylization effects at run-time. Based on qualitative and quantitative user studies, Pictory has been redesigned and optimized to support casual users as well as mobile artists by providing effective, yet easy to understand, tools to facilitate image editing at multiple levels of control.

@inproceedings{PSKD17,
  author = { Pasewaldt, Sebastian and Semmo, Amir and Klingbeil, Mandy and D{\"o}llner, J{\"u}rgen },
  title = { Demo: Pictory - Neural Style Transfer and Editing with CoreML },
  year = { 2017 },
  location = { Bangkok, Thailand },
  series = { Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3132787.3132815 }
}

Pictory: Combining Neural Style Transfer and Image Filtering

Amir Semmo, Matthias Trapp, Jürgen Döllner, and Mandy Klingbeil
ACM SIGGRAPH Appy Hour 2017

Abstract, BibTeX, DOI, Paper (PDF)

This work presents Pictory, a mobile app that empowers users to transform photos into artistic renditions by using a combination of neural style transfer with user-controlled state-of-the-art nonlinear image filtering. The combined approach features merits of both artistic rendering paradigms: deep convolutional neural networks can be used to transfer style characteristics at a global scale, while image filtering is able to simulate phenomena of artistic media at a local scale. Thereby, the proposed app implements an interactive two-stage process: first, style presets based on pre-trained feed-forward neural networks are applied using GPU-accelerated compute shaders to obtain initial results. Second, the intermediate output is stylized via oil paint, watercolor, or toon filtering to inject characteristics of traditional painting media such as pigment dispersion (watercolor) as well as soft color blendings (oil paint), and to filter artifacts such as fine-scale noise. Finally, on-screen painting facilitates pixel-precise creative control over the filtering stage, e.g., to vary the brush and color transfer, while joint bilateral upsampling enables outputs at full image resolution suited for printing on real canvas.
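
The joint bilateral upsampling step can be sketched with opencv-contrib's ximgproc module (hypothetical code, not the app's GPU shaders): the full-resolution photo guides the upsampled stylization so that edges stay crisp.

import cv2

def joint_bilateral_upsample(stylized_lowres, photo_fullres):
    h, w = photo_fullres.shape[:2]
    up = cv2.resize(stylized_lowres, (w, h), interpolation=cv2.INTER_LINEAR)
    # args: joint guide, source, d, sigmaColor, sigmaSpace
    return cv2.ximgproc.jointBilateralFilter(photo_fullres, up, 9, 25, 7)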

@inproceedings{STDKD17,
  author = { Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen and Klingbeil, Mandy },
  title = { Pictory: Combining Neural Style Transfer and Image Filtering },
  year = { 2017 },
  location = { Los Angeles, California },
  pages = { 5:1--5:2 },
  series = { Proceedings SIGGRAPH Appy Hour },
  month = { 8 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3098900.3098906 }
}

Neural Style Transfer: A Paradigm Shift for Image-based Artistic Rendering?

Amir Semmo, Tobias Isenberg, and Jürgen Döllner
Proceedings International Symposium on Non-Photorealistic Animation and Rendering (NPAR) 2017

Abstract, BibTeX, DOI, Paper (PDF), Slides (PDF, 19.8 MiB)

In this meta paper we discuss image-based artistic rendering (IB-AR) based on neural style transfer (NST) and argue that, while NST may represent a paradigm shift for IB-AR, it also has to evolve as an interactive tool that considers the design aspects and mechanisms of artwork production. IB-AR received significant attention in the past decades for visual communication, covering a plethora of techniques to mimic the appeal of artistic media. Example-based rendering represents one of the most promising paradigms in IB-AR to (semi-)automatically simulate artistic media with high fidelity, but has so far been limited because it relies on pre-defined image pairs for training or informs only low-level image features for texture transfers. Advancements in deep learning have been shown to alleviate these limitations by matching content and style statistics via activations of neural network layers, thus making a generalized style transfer practicable. We categorize style transfers within the taxonomy of IB-AR, then propose a semiotic structure to derive a technical research agenda for NSTs with respect to the grand challenges of NPAR. We finally discuss the potentials of NSTs, thereby identifying applications such as casual creativity and art production.

@inproceedings{SID17,
  author = { Semmo, Amir and Isenberg, Tobias and D{\"o}llner, J{\"u}rgen },
  title = { Neural Style Transfer: A Paradigm Shift for Image-based Artistic Rendering? },
  year = { 2017 },
  location = { Los Angeles, California },
  pages = { 5:1--5:13 },
  series = { Proceedings International Symposium on Non-Photorealistic Animation and Rendering (NPAR) },
  month = { 7 },
  editor = { Holger Winnem{\"o}ller and Lyn Bartram },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/3092919.3092920 }
}
2016

Interactive Image Filtering with Multiple Levels-of-Control on Mobile Devices

Amir Semmo, Tobias Dürschmid, Matthias Trapp, Mandy Klingbeil, Jürgen Döllner, and Sebastian Pasewaldt
SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications 2016

Abstract, BibTeX, DOI, Paper (PDF), Filter Results (Images / ZIP, 34 MiB)

With the continuous development of mobile graphics hardware, interactive high-quality image stylization based on nonlinear filtering is becoming feasible and increasingly used in casual creativity apps. However, these apps often provide only high-level controls to parameterize image filters and generally lack support for low-level (artistic) control, thus automating art creation rather than assisting it. This work presents a GPU-based framework for parameterizing image filters at three levels of control: (1) presets, followed by (2) global parameter adjustments, can be interactively refined by (3) complementary on-screen painting that operates within the filters' parameter spaces for local adjustments. The framework provides a modular XML-based effect scheme to effectively build complex image processing chains, using these interactive filters as building blocks, that can be efficiently processed on mobile devices. Thereby, global and local parameterizations are directed with higher-level algorithmic support to ease the interactive editing process, which is demonstrated by state-of-the-art stylization effects, such as oil paint filtering and watercolor rendering.

@inproceedings{SDTKDP16,
  author = { Semmo, Amir and D{\"u}rschmid, Tobias and Trapp, Matthias and Klingbeil, Mandy and D{\"o}llner, J{\"u}rgen and Pasewaldt, Sebastian },
  title = { Interactive Image Filtering with Multiple Levels-of-control on Mobile Devices },
  year = { 2016 },
  location = { Macau },
  pages = { 2:1--2:8 },
  series = { Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/2999508.2999521 }
}
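
The three levels of control can be pictured as successive edits to a per-pixel parameter field, as in the sketch below. This is purely illustrative (the framework itself is XML-based and runs on mobile GPUs); all names and values are invented for the example.

import numpy as np

H, W = 256, 256
PRESETS = {"watercolor": 0.6, "oil": 0.8}          # illustrative preset values

# (1) A preset initializes a per-pixel parameter field (e.g., abstraction).
abstraction = np.full((H, W), PRESETS["oil"], dtype=np.float32)

# (2) A global slider scales the field uniformly.
abstraction *= 0.9

# (3) On-screen painting writes into the parameter space: a soft circular
# brush locally raises the level of abstraction.
yy, xx = np.mgrid[0:H, 0:W]
brush = np.exp(-((xx - 128) ** 2 + (yy - 96) ** 2) / (2 * 20.0 ** 2))
abstraction = np.clip(abstraction + 0.3 * brush, 0.0, 1.0)

# The effect pipeline then samples 'abstraction' per pixel at run-time.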

BeCasso: Artistic Image Processing and Editing on Mobile Devices

Sebastian Pasewaldt, Amir Semmo, Jürgen Döllner, and Frank Schlegel
SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications (Demo) 2016

Abstract, BibTeX, DOI, Paper (PDF)

BeCasso is a mobile app that enables users to transform photos into high-quality, high-resolution non-photorealistic renditions, such as oil and watercolor paintings, cartoons, and colored pencil drawings, which are inspired by real-world paintings or drawing techniques. In contrast to neural network and physically-based approaches, the app employs state-of-the-art nonlinear image filtering. For example, oil paint and cartoon effects are based on smoothed structure information to interactively synthesize renderings with soft color transitions. BeCasso empowers users to easily create aesthetic renderings by implementing a two-fold strategy: First, it provides parameter presets that may serve as a starting point for a custom stylization based on global parameter adjustments. Thereby, users can obtain initial renditions that may be fine-tuned afterwards. Second, it enables local style adjustments: using on-screen painting metaphors, users are able to locally adjust different stylization features, e.g., to vary the level of abstraction, pen, brush and stroke direction, or the contour lines. In this way, the app provides tools for both higher-level interaction and low-level control to serve the different needs of non-experts and digital artists.

@inproceedings{PSDS16,
  author = { Pasewaldt, Sebastian and Semmo, Amir and D{\"o}llner, J{\"u}rgen and Schlegel, Frank },
  title = { BeCasso: Artistic Image Processing and Editing on Mobile Devices },
  year = { 2016 },
  location = { Macau },
  pages = { 14:1--14:1 },
  series = { Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/2999508.2999518 }
}

Design and Implementation of Non-Photorealistic Rendering Techniques for 3D Geospatial Data

Amir Semmo
Doctoral Thesis, Hasso Plattner Institute at the University of Potsdam 2016

Abstract, BibTeX, URN, Thesis (PDF)

This thesis proposes non-photorealistic rendering techniques that enable both the computation and selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using shader technologies of graphics processing units for real-time image synthesis. Unlike photorealistic rendering, the techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics to synthesize illustrative renditions of geospatial feature type entities such as water surfaces, buildings, and infrastructure networks. In addition, this thesis contributes a generic system that enables the integration of different graphic styles, photorealistic and non-photorealistic, and provides seamless transitions between them according to user tasks, camera view, and image resolution.

@phdthesis{S16,
  author = { Amir Semmo },
  title = { Design and Implementation of Non-Photorealistic Rendering Techniques for 3D Geospatial Data },
  school = { Hasso Plattner Institute at the University of Potsdam },
  year = { 2016 },
  month = { 11 },
  address = { Potsdam, Germany },
  url = { http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-99525 }
}

BeCasso: Image Stylization by Interactive Oil Paint Filtering on Mobile Devices

Amir Semmo, Jürgen Döllner, and Frank Schlegel
ACM SIGGRAPH Appy Hour 2016

Abstract, BibTeX, DOI, Paper (PDF)

BeCasso is a mobile app that enables users to transform photos into an oil paint look that is inspired by traditional painting elements. In contrast to stroke-based approaches, the app uses state-of-the-art nonlinear image filtering techniques based on smoothed structure information to interactively synthesize oil paint renderings with soft color transitions. BeCasso empowers users to easily create aesthetic oil paint renderings by implementing a two-fold strategy. First, it provides parameter presets that may serve as a starting point for a custom stylization based on global parameter adjustments. Second, it introduces a novel interaction approach that operates within the parameter spaces of the stylization effect to facilitate creative control over the visual output: on-screen painting enables users to locally adjust the appearance in image regions, e.g., to vary the level of abstraction, brush and stroke direction. This way, the app provides tools for both higher-level interaction and low-level control to serve the different needs of non-experts and digital artists.

@inproceedings{SDS2016,
  author = { Semmo, Amir and D{\"o}llner, J{\"u}rgen and Schlegel, Frank },
  title = { BeCasso: Image Stylization by Interactive Oil Paint Filtering on Mobile Devices },
  booktitle = { Proceedings SIGGRAPH Appy Hour },
  year = { 2016 },
  month = { 7 },
  location = { Anaheim, California },
  pages = { 6:1--6:1 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/2936744.2936750 }
}

Interactive Multi-scale Oil Paint Filtering on Mobile Devices

Amir Semmo, Matthias Trapp, Tobias Dürschmid, Jürgen Döllner, and Sebastian Pasewaldt
ACM SIGGRAPH Posters 2016

Abstract, BibTeX, DOI, Paper (PDF)

This work presents an interactive mobile implementation of a filter that transforms images into an oil paint look. To this end, a multi-scale approach is introduced that processes image pyramids and uses flow-based joint bilateral upsampling to achieve deliberate levels of abstraction at multiple scales and interactive frame rates. The approach facilitates the implementation of interactive tools that adjust the appearance of filtering effects at run-time, which is demonstrated by an on-screen painting interface for per-pixel parameterization that fosters the casual creativity of non-artists.

@inproceedings{STDDP2016,
  author = { Semmo, Amir and Trapp, Matthias and D{\"u}rschmid, Tobias and D{\"o}llner, J{\"u}rgen and Pasewaldt, Sebastian },
  title = { Interactive Multi-scale Oil Paint Filtering on Mobile Devices },
  booktitle = { SIGGRAPH Posters },
  year = { 2016 },
  location = { Anaheim, California },
  pages = { 42:1--42:2 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/2945078.2945120 }
}
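
For readers unfamiliar with joint bilateral upsampling, the sketch below shows the plain (non-flow-based) variant in NumPy: a low-resolution filtered result is upsampled under the guidance of the full-resolution image, so edges of the guide are preserved. Parameters and image sizes are illustrative; the paper's implementation is flow-based and GPU-accelerated.

import numpy as np

def jbu(low, guide, scale, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Upsample 'low' to the resolution of 'guide' with edge awareness."""
    H, W = guide.shape[:2]
    h, w = low.shape[:2]
    out = np.zeros((H, W, low.shape[2]), dtype=np.float32)
    for y in range(H):
        for x in range(W):
            cy, cx = y / scale, x / scale      # position on the low-res grid
            num = np.zeros(low.shape[2], dtype=np.float32)
            den = 1e-8
            for j in range(-radius, radius + 1):
                for i in range(-radius, radius + 1):
                    ly = min(max(int(round(cy)) + j, 0), h - 1)
                    lx = min(max(int(round(cx)) + i, 0), w - 1)
                    gy, gx = min(ly * scale, H - 1), min(lx * scale, W - 1)
                    w_s = np.exp(-((cy - ly) ** 2 + (cx - lx) ** 2)
                                 / (2 * sigma_s ** 2))
                    d = guide[y, x] - guide[gy, gx]
                    w_r = np.exp(-float(np.dot(d, d)) / (2 * sigma_r ** 2))
                    num += w_s * w_r * low[ly, lx]
                    den += w_s * w_r
            out[y, x] = num / den
    return out

guide = np.random.rand(32, 32, 3).astype(np.float32)   # full-res image
low = guide[::4, ::4] * 0.5                            # coarsely filtered stand-in
up = jbu(low, guide, scale=4)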

Interactive Oil Paint Filtering On Mobile Devices

Amir Semmo, Matthias Trapp, Sebastian Pasewaldt, and Jürgen Döllner
Expressive Poster Session 2016

Abstract, BibTeX, DOI, Paper (PDF)

Image stylization enjoys a growing popularity on mobile devices to foster casual creativity. However, the implementation and provision of high-quality image filters for artistic rendering still face the inherent limitations of mobile graphics hardware, such as computing power and memory resources. This work presents a mobile implementation of a filter that transforms images into an oil paint look, thereby highlighting concepts and techniques on how to perform multi-stage nonlinear image filtering on mobile devices. The proposed implementation is based on OpenGL ES and the OpenGL ES shading language, and supports on-screen painting to interactively adjust the appearance in local image regions, e.g., to vary the level of abstraction, brush, and stroke direction. Evaluations of the implementation indicate interactive performance and results of similar aesthetic quality to the original desktop variant.

@inproceedings{STPD2016,
  author = { Semmo, Amir and Trapp, Matthias and Pasewaldt, Sebastian and D{\"o}llner, J{\"u}rgen },
  title = { Interactive Oil Paint Filtering On Mobile Devices },
  booktitle = { Expressive - Posters, Artworks, and Bridging Papers },
  year = { 2016 },
  editor = { Ergun Akleman and Lyn Bartram and Anıl Çamcı and Angus Forbes and Penousal Machado },
  publisher = { The Eurographics Association },
  doi = { 10.2312/exp.20161255 }
}

Image Stylization by Interactive Oil Paint Filtering

Amir Semmo, Daniel Limberger, Jan Eric Kyprianidis, and Jürgen Döllner
Computers & Graphics 2016

Abstract, BibTeX, DOI, Paper (PDF), Filter Results (Images / ZIP, 191 MiB)

This paper presents an interactive system for transforming images into an oil paint look. The system comprises two major stages. First, it derives dominant colors from an input image for feature-aware recolorization and quantization to conform with a global color palette. Afterwards, it employs nonlinear filtering based on the smoothed structure adapted to the main feature contours of the quantized image to synthesize a paint texture in real-time. Our filtering approach leads to homogeneous outputs in the color domain and enables creative control over the visual output, such as color adjustments and per-pixel parametrizations by means of interactive painting. To this end, our system introduces a generalized brush-based painting interface that operates within parameter spaces to locally adjust the level of abstraction of the filtering effects. Several results demonstrate the various applications of our filtering approach to different genres of photography.

@article{SLKD16,
  author = { Semmo, Amir and Limberger, Daniel and Kyprianidis, Jan Eric and D{\"o}llner, J{\"u}rgen },
  title = { Image Stylization by Interactive Oil Paint Filtering },
  journal = { Computers \& Graphics },
  year = { 2016 },
  volume = { 55 },
  pages = { 157--171 },
  doi = { 10.1016/j.cag.2015.12.001 }
}
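
The palette-based quantization at the heart of the first stage can be approximated with a simple k-means clustering, as in the NumPy sketch below. This is only a stand-in: the paper derives dominant colors with a feature-aware scheme rather than plain k-means, and the random input image is a placeholder.

import numpy as np

def kmeans_palette(pixels, k=8, iters=10, seed=0):
    """Derive k dominant colors from an Nx3 pixel array (plain k-means)."""
    rng = np.random.default_rng(seed)
    palette = pixels[rng.choice(len(pixels), k, replace=False)].copy()
    for _ in range(iters):
        d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                palette[c] = members.mean(0)
    return palette

img = np.random.rand(64, 64, 3).astype(np.float32)    # placeholder photo
flat = img.reshape(-1, 3)
palette = kmeans_palette(flat)

# Recolor every pixel to its nearest palette entry (the quantized image
# then feeds the structure-adaptive paint-texture stage).
d = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
quantized = palette[d.argmin(1)].reshape(img.shape)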
2015

Cartography-Oriented Design of 3D Geospatial Information Visualization - Overview and Techniques

Amir Semmo, Matthias Trapp, Markus Jobst, and Jürgen Döllner
The Cartographic Journal (International Cartographic Conference) 2015

Abstract, BibTeX, DOI, Paper / Preprint (PDF), Link to Journal

In the economy, in society, and in personal life, map-based interactive geospatial visualization is becoming a natural element of a growing number of applications and systems. The visualization of 3D geospatial information, however, raises the question of how to represent the information in an effective way. Considerable research has been done in technology-driven directions in the fields of cartography and computer graphics (e.g., design principles, visualization techniques). Here, non-photorealistic rendering represents a promising visualization category, situated between both fields, that offers a large number of degrees of freedom for the cartography-oriented visual design of complex 2D and 3D geospatial information for a given application context. Still today, however, specifications and techniques for mapping cartographic design principles to the state-of-the-art rendering pipeline of 3D computer graphics remain to be explored. This paper revisits cartographic design principles for 3D geospatial visualization and introduces an extended 3D semiotic model that complies with the general, interactive visualization pipeline. Based on this model, we propose non-photorealistic rendering techniques to interactively synthesize cartographic renditions of basic feature types, such as terrain, water, and buildings. In particular, it includes a novel iconification concept to seamlessly interpolate between photorealistic and cartographic representations of 3D landmarks. Our work concludes with a discussion of open challenges in this field of research, including topics such as user interaction and evaluation.

@article{STJD2015,
  author = { Semmo, Amir and Trapp, Matthias and Jobst, Markus and D{\"o}llner, J{\"u}rgen },
  title = { Cartography-Oriented Design of 3D Geospatial Information Visualization - Overview and Techniques },
  journal = { The Cartographic Journal },
  year = { 2015 },
  volume = { 52 },
  number = { 2 },
  pages = { 95--106 },
  doi = { 10.1080/00087041.2015.1119462 }
}

Image Stylization by Oil Paint Filtering using Color Palettes

Amir Semmo, Daniel Limberger, Jan Eric Kyprianidis, and Jürgen Döllner
Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) 2015

Abstract, BibTeX, DOI, Paper (PDF), Filter Results (Images / ZIP, 191 MiB)

This paper presents an approach for transforming images into an oil paint look. To this end, a color quantization scheme is proposed that performs feature-aware recolorization using the dominant colors of the input image. In addition, an approach for real-time computation of paint textures is presented that builds on the smoothed structure adapted to the main feature contours of the quantized image. Our stylization technique leads to homogeneous outputs in the color domain and enables creative control over the visual output, such as color adjustments and per-pixel parametrizations by means of interactive painting.

@inproceedings{SLKD14,
  author = { Semmo, Amir and Limberger, Daniel and Kyprianidis, Jan Eric and D{\"o}llner, J{\"u}rgen },
  title = { Image Stylization by Oil Paint Filtering using Color Palettes },
  booktitle = { Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) },
  year = { 2015 },
  pages = { 149--158 },
  month = { 6 },
  doi = { 10.2312/exp.20151188 }
}

Interactive Image Filtering for Level-of-Abstraction Texturing of Virtual 3D Scenes

Amir Semmo and Jürgen Döllner
Computers & Graphics 2015

Abstract, BibTeX, DOI, Paper (PDF)

Texture mapping is a key technology in computer graphics. For the visual design of 3D scenes, in particular, effective texturing depends significantly on how important contents are expressed, e.g., by preserving global salient structures, and how their depiction is cognitively processed by the user in an application context. Edge-preserving image filtering is one key approach to address these concerns. Much research has focused on applying image filters in a post-process stage to generate artistically stylized depictions. However, these approaches generally do not preserve depth cues, which are important for the perception of 3D visualization (e.g., texture gradient). To this end, filtering is required that processes texture data coherently with respect to linear perspective and spatial relationships. In this work, we present an approach for texturing 3D scenes with perspective coherence by arbitrary image filters. We propose decoupled deferred texturing with (1) caching strategies to interactively perform image filtering prior to texture mapping, (2) filtering performed for each mipmap level separately to enable a progressive level of abstraction, and (3) direct interaction interfaces to parameterize the visualization according to spatial, semantic, and thematic data. We demonstrate the potentials of our method by several applications using touch or natural language inputs to serve the different interests of users in specific information, including illustrative visualization, focus+context visualization, geometric detail removal, and semantic depth of field. The approach supports frame-to-frame coherence, order-independent transparency, multitexturing, and content-based filtering. In addition, it seamlessly integrates into real-time rendering pipelines, and is extensible for custom interaction techniques.

@article{SD2015,
  author = { Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Interactive Image Filtering for Level-of-Abstraction Texturing of Virtual 3D Scenes },
  journal = { Computers \& Graphics },
  year = { 2015 },
  volume = { 52 },
  pages = { 181--198 },
  doi = { 10.1016/j.cag.2015.02.001 }
}
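
A toy version of the per-mipmap-level filtering idea is sketched below in NumPy: each pyramid level is filtered with a stronger setting, so coarser levels exhibit a progressively higher level of abstraction. A box blur stands in for an arbitrary (typically edge-preserving) image filter, and the radii are illustrative.

import numpy as np

def box_blur(img, r):
    """Stand-in for an arbitrary image filter; strength grows with r."""
    if r == 0:
        return img
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(img, (dy, dx), axis=(0, 1))
    return out / (2 * r + 1) ** 2

def downsample(img):
    """2x2 average, the classic mipmap reduction."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

texture = np.random.rand(128, 128, 3).astype(np.float32)
pyramid = [texture]
while pyramid[-1].shape[0] > 1:
    pyramid.append(downsample(pyramid[-1]))

# Filter each mipmap level separately with an abstraction radius that
# grows with the level; the renderer then samples the filtered pyramid,
# so distant surfaces appear more abstracted.
filtered = [box_blur(level, r=min(lvl, 3))
            for lvl, level in enumerate(pyramid)]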

Interactive Rendering and Stylization of Transportation Networks Using Distance Fields

Matthias Trapp, Amir Semmo, and Jürgen Döllner
Proceedings of the 10th International Conference on Computer Graphics Theory and Applications (GRAPP) 2015

Abstract, BibTeX, DOI, Paper (PDF)

Transportation networks, such as streets, railroads or metro systems, constitute primary elements in cartography for reckoning and navigation. In recent years, they have become an increasingly important part of 3D virtual environments for the interactive analysis and communication of complex hierarchical information, for example in routing, logistics optimization, and disaster management. A variety of rendering techniques have been proposed that deal with integrating transportation networks within these environments, but have so far neglected the many challenges of an interactive design process to adapt their spatial and thematic granularity (i.e., level-of-detail and level-of-abstraction) according to a user's context. This paper presents an efficient real-time rendering technique for the view-dependent rendering of geometrically complex transportation networks within 3D virtual environments. Our technique is based on distance fields using deferred texturing that shifts the design process to the shading stage for real-time stylization. We demonstrate and discuss our approach by means of street networks using cartographic design principles for context-aware stylization, including view-dependent scaling for clutter reduction, contour-lining to provide figure-ground, handling of street crossings via shading-based blending, and task-dependent colorization. Finally, we present potential usage scenarios and applications together with a performance evaluation of our implementation.

@inproceedings{TSD2015,
  author = { Trapp, Matthias and Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Interactive Rendering and Stylization of Transportation Networks Using Distance Fields },
  booktitle = { Proceedings of the 10th International Conference on Computer Graphics Theory and Applications (GRAPP) },
  year = { 2015 },
  pages = { 207--219 },
  doi = { 10.5220/0005310502070219 }
}
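
The core idea, shifting stylization to the shading stage via distance fields, can be illustrated in a few lines of NumPy: the per-pixel distance to a street segment drives the shading of the road body and its contour line. Segment coordinates, widths, and intensities are invented for the example.

import numpy as np

def dist_to_segment(px, py, ax, ay, bx, by):
    """Per-pixel distance to the segment (ax,ay)-(bx,by)."""
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
    t = np.clip(t, 0.0, 1.0)
    return np.hypot(px - (ax + t * abx), py - (ay + t * aby))

H, W = 128, 128
yy, xx = np.mgrid[0:H, 0:W].astype(np.float32)
d = dist_to_segment(xx, yy, 20, 100, 110, 30)    # one street segment

# Styling happens purely in "shading": thresholds on the distance field
# produce the contour band and the road body, no extra geometry needed.
width, outline = 6.0, 1.5
image = np.ones((H, W))                          # background
image[d < width + outline] = 0.1                 # dark contour band
image[d < width] = 0.7                           # lighter road body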
2014

An Interaction Framework for Level-of-Abstraction Visualization of 3D Geovirtual Environments

Amir Semmo and Jürgen Döllner
Proceedings 2nd ACM SIGSPATIAL Workshop on MapInteraction 2014

Abstract, BibTeX, DOI, Paper (PDF)

3D geovirtual environments constitute effective media for the analysis and communication of complex geospatial data. Today, these environments are often visualized using static graphical variants (e.g., 2D maps, 3D photorealistic) from which a user is able to choose. To serve the different interests of users in specific information, however, the spatial and thematic granularity at which model contents are represented (i.e., the level of abstraction) should be dynamically adapted to the user's context, which requires specialized interaction techniques for parameterization. In this work, we present a framework that enables interaction interfaces to parameterize the level-of-abstraction visualization according to spatial, semantic, and thematic data. The framework is implemented in a visualization system that provides image-based rendering techniques for context-aware abstraction and highlighting. Using touch and natural language interfaces, we demonstrate its versatile application to geospatial tasks, including exploration, navigation, and orientation.

@inproceedings{SD2014_3,
  author = { Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { An Interaction Framework for Level-of-Abstraction Visualization of 3D Geovirtual Environments },
  booktitle = { Proceedings 2nd ACM SIGSPATIAL Workshop on MapInteraction (MapInteract) },
  year = { 2014 },
  month = { 11 },
  location = { Dallas/Fort Worth, Texas },
  pages = { 43--49 },
  publisher = { ACM },
  address = { New York },
  doi = { 10.1145/2677068.2677072 }
}

Multi-Perspective 3D Panoramas

Sebastian Pasewaldt, Amir Semmo, Matthias Trapp, and Jürgen Döllner
International Journal of Geographical Information Science (IJGIS) 2014

Abstract, BibTeX, DOI, Paper (PDF)

This article presents multi-perspective 3D panoramas that focus on visualizing 3D geovirtual environments (3D GeoVEs) for navigation and exploration tasks. Their key element, a multi-perspective view, seamlessly combines what is seen from multiple viewpoints into a single image. This approach facilitates the presentation of information for virtual 3D city and landscape models, particularly by reducing occlusions, increasing screen-space utilization, and providing additional context within a single image. We complement multi-perspective views with cartographic visualization techniques to stylize features according to their semantics and highlight important or prioritized information. When combined, both techniques constitute the core implementation of interactive, multi-perspective 3D panoramas. They offer a large number of effective means for visual communication of 3D spatial information, a high degree of customization with respect to cartographic design, and manifold applications in different domains. We discuss design decisions of 3D panoramas for the exploration of and navigation in 3D GeoVEs. We also discuss a preliminary user study that indicates that 3D panoramas are a promising approach for navigation systems using 3D GeoVEs.

@article{PSTD2014,
  author = { Pasewaldt, Sebastian and Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Multi-Perspective 3D Panoramas },
  journal = { International Journal of Geographical Information Science (IJGIS) },
  year = { 2014 },
  volume = { 28 },
  pages = { 2030--2051 },
  number = { 10 },
  doi = { 10.1080/13658816.2014.922686 }
}

Oil Paint Filtering Using Color Palettes For Colorization

Amir Semmo and Jürgen Döllner
Expressive Poster Session 2014

Abstract, BibTeX, Paper (PDF)

We present a novel technique for oil paint filtering that uses color palettes for colorization. First, dominant feature-aware colors are derived from the input image via entropy-based metrics. Seed pixels are then determined and propagated to the remaining pixels by adopting the optimization framework of Levin et al. [2004] for feature-aware colorization. Finally, the quantized output is combined with flow-based highlights and contour lines to simulate paint texture. Our technique leads to homogeneous outputs in the color domain and enables interactive control over color definitions.

@misc{SD2014_2,
  author = { Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Oil Paint Filtering Using Color Palettes For Colorization },
  booktitle = { Expressive Poster Session },
  year = { 2014 }
}

Image Filtering for Interactive Level-of-Abstraction Visualization of 3D Scenes

Amir Semmo and Jürgen Döllner
Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) 2014

Abstract, BibTeX, DOI, Paper (PDF), Additional Material (PDF), Video (YouTube)

Texture mapping is a key technology in computer graphics for the visual design of rendered 3D scenes. An effective information transfer of surface properties, encoded by textures, however, depends significantly on how important information is highlighted and cognitively processed by the user in an application context. Edge-preserving image filtering is a promising approach to address this concern while preserving global salient structures. Much research has focused on applying image filters in a post-process stage to foster an artistically stylized rendering, but these approaches are generally not able to preserve depth cues important for 3D visualization (e.g., texture gradient). To this end, filtering is required that processes texture data coherently with respect to linear perspective and spatial relationships. In this work, we present a system for processing textured 3D scenes with perspective coherence by arbitrary image filters. We propose decoupled deferred texturing with (1) caching strategies to interactively perform image filtering prior to texture mapping, and (2) filtering performed for each mipmap level separately to enable a progressive level of abstraction. We demonstrate the potentials of our methods on several applications, including illustrative visualization, focus+context visualization, geometric detail removal, and depth of field. Our system supports frame-to-frame coherence, order-independent transparency, multitexturing, and content-based filtering.

@inproceedings{SD14,
  author = { Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Image Filtering for Interactive Level-of-Abstraction Visualization of 3D Scenes },
  booktitle = { Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) },
  year = { 2014 },
  pages = { 5--14 },
  month = { 8 },
  doi = { 10.1145/2630099.2630101 }
}
2013

Interactive Projective Texturing for Non-Photorealistic Shading of Technical 3D Models

Roland Lux, Matthias Trapp, Amir Semmo, and Jürgen Döllner
Proceedings of 11th Theory and Practice of Computer Graphics 2013 Conference (TP.CG) 2013

Abstract, BibTeX, Video (YouTube), Paper (PDF)

This paper presents a novel interactive rendering technique for creating and editing shadings for man-made objects in technical 3D visualizations. In contrast to shading approaches that use intensities computed based on surface normals (e.g., Phong, Gooch, toon shading), the presented approach uses one-dimensional gradient textures, which can be parameterized and interactively manipulated based on per-object bounding volume approximations. The fully hardware-accelerated rendering technique is based on projective texture mapping and customizable intensity transfer functions. A provided performance evaluation shows comparable results to traditional normal-based shading approaches. The work also introduces simple direct-manipulation metaphors that enable interactive user control of the gradient texture alignment and intensity transfer functions.

@inproceedings{LTSD13,
  author = { Lux, Roland and Trapp, Matthias and Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Interactive Projective Texturing for Non-Photorealistic Shading of Technical 3D Models },
  booktitle = { Proceedings of 11th Theory and Practice of Computer Graphics Conference (TP.CG) },
  year = { 2013 },
  editor = { Silvester Czanner and Wen Tang },
  pages = { 101--108 },
  month = { 9 },
  publisher = { The Eurographics Association },
  isbn = { 978-3-905673-98-2 }
}
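
A minimal analogue of the gradient-texture shading idea in NumPy: instead of normal-based intensities, each point's normalized position along a bounding-volume axis indexes a one-dimensional gradient, remapped by an intensity transfer function. The point set, axis, and transfer function are placeholders; the paper's technique works with projective texture mapping on the GPU.

import numpy as np

points = np.random.rand(1000, 3) * [4.0, 1.0, 2.0]    # object-space samples
axis = np.array([0.0, 0.0, 1.0])                      # gradient alignment axis

# Normalized coordinate along the bounding-volume axis replaces the
# usual N.L term as the shading parameter.
proj = points @ axis
t = (proj - proj.min()) / (proj.max() - proj.min())   # 0..1 along the axis

gradient = np.linspace(0.2, 1.0, 256)                 # 1D gradient "texture"
transfer = lambda v: v ** 1.5                         # intensity transfer function
intensity = transfer(gradient[(t * 255).astype(int)])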

Real-Time Rendering of Water Surfaces with Cartography-Oriented Design

Amir Semmo, Jan Eric Kyprianidis, Matthias Trapp, and Jürgen Döllner
Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) 2013

Abstract, BibTeX, DOI, Paper (PDF), Presentation Slides (PDF), Video (YouTube)

More than 70% of the Earth's surface is covered by oceans, seas, and lakes, making water surfaces one of the primary elements in geospatial visualization. Traditional approaches in computer graphics simulate and animate water surfaces in the most realistic ways. However, to improve orientation, navigation, and analysis tasks within 3D virtual environments, these surfaces need to be carefully designed to enhance shape perception and land-water distinction. We present an interactive system that renders water surfaces with cartography-oriented design using the conventions of mapmakers. Our approach is based on the observation that hand-drawn maps utilize and align texture features to shorelines with non-linear distance to improve figure-ground perception and express motion. To obtain local orientation and principal curvature directions, first, our system computes distance and feature-aligned distance maps. Given these maps, waterlining, water stippling, contour-hatching, and labeling are applied in real-time with spatial and temporal coherence. The presented methods can be useful for map exploration, landscaping, urban planning, and disaster management, which is demonstrated by various real-world virtual 3D city and landscape models.

@inproceedings{SKTD13,
  author = { Semmo, Amir and Kyprianidis, Jan Eric and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Real-Time Rendering of Water Surfaces with Cartography-Oriented Design },
  year = { 2013 },
  series = { Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) },
  pages = { 5--14 },
  month = { 7 },
  doi = { 10.1145/2487276.2487277 }
}
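
A toy version of the waterlining idea: from a land/water mask, a distance map to the shoreline drives repeated, shore-parallel lines whose spacing grows non-linearly and whose intensity fades with distance. The NumPy/SciPy sketch below omits feature alignment, stippling, hatching, and labeling; the mask and constants are invented.

import numpy as np
from scipy.ndimage import distance_transform_edt

H, W = 128, 128
yy, xx = np.mgrid[0:H, 0:W]
water = (yy > 40 + 10 * np.sin(xx / 12.0))       # toy land/water mask

# Distance of each water pixel to the nearest shoreline (land) pixel.
d = distance_transform_edt(water)

warped = np.sqrt(d) * 4.0                        # non-linear line spacing
lines = (np.mod(warped, 4.0) < 0.8)              # periodic shore-parallel bands
fade = np.clip(1.0 - d / 40.0, 0.0, 1.0)         # fade out with distance

waterlines = np.where(water & lines, fade, 0.0)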
2012

Towards Comprehensible Digital 3D Maps

Sebastian Pasewaldt, Amir Semmo, Matthias Trapp, and Jürgen Döllner
Proceedings Service-Oriented Mapping (SOMAP) 2012

Abstract, BibTeX, Paper (PDF), Slides (PDF)

Digital mapping services have become fundamental tools in economy and society to provide domain experts and non-experts with customized, multi-layered map contents. In particular because of the continuous advancements in the acquisition, provision, and visualization of virtual 3D city and landscape models, 3D mapping services today represent key components to a growing number of applications, like car navigation, education, or disaster management. However, current systems and applications providing digital 3D maps face drawbacks and limitations, such as occlusion, visual clutter, or insufficient use of screen space, that impair an effective comprehension of geoinformation. To this end, cartographers and computer graphics engineers have developed design guidelines and rendering and visualization techniques that aim to increase the effectiveness and expressiveness of digital 3D maps, but whose seamless combination has yet to be achieved. This work discusses potentials of digital 3D maps that are based on combining cartography-oriented rendering techniques and multi-perspective views. For this purpose, a classification of cartographic design principles, visualization techniques, as well as suitable combinations are identified that aid comprehension of digital 3D maps. According to this classification, a prototypical implementation demonstrates the benefits of multi-perspective and non-photorealistic rendering techniques for visualization of 3D map contents. In particular, it enables (1) a seamless combination of cartography-oriented and photorealistic graphic styles while (2) increasing screen-space utilization, and (3) simultaneously directing a viewer's gaze to important or prioritized information.

@inproceedings{PSTD12,
  author = { Pasewaldt, Sebastian and Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Towards Comprehensible Digital 3D Maps },
  booktitle = { Service-Oriented Mapping (SOMAP) },
  year = { 2012 },
  editor = { Markus Jobst },
  pages = { 261--276 },
  month = { 11 },
  organization = { International Cartographic Association },
  publisher = { Jobstmedia Management Verlag, Wien }
}

Evaluating the Perceptual Impact of Rendering Techniques on Thematic Color Mappings in 3D Virtual Environments

Juri Engel, Amir Semmo, Matthias Trapp, and Jürgen Döllner
Proceedings 18th International Workshop on Vision, Modeling and Visualization (VMV) 2013

Abstract, BibTeX, DOI, Paper (PDF)

Using colors for thematic mapping is a fundamental approach in visualization, and has become essential for 3D virtual environments to effectively communicate multidimensional, thematic information. Preserving depth cues within these environments to emphasize spatial relations between geospatial features remains an important issue. A variety of rendering techniques have been developed to preserve depth cues in 3D information visualization, including shading, global illumination, and image stylization. However, these techniques alter color values, which may lead to ambiguity in a color mapping and loss of information. Depending on the applied rendering techniques and color mapping, this loss should be reduced while still preserving depth cues when communicating thematic information. This paper presents the results of a quantitative and qualitative user study that evaluates the impact of rendering techniques on information and spatial perception when using visualizations of thematic data in 3D virtual environments. We report the results of this study with respect to four perception-related tasks, showing significant differences in error rate and task completion time for different rendering techniques and color mappings.

@inproceedings{ESTD13,
  author = { Engel, Juri and Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Evaluating the Perceptual Impact of Rendering Techniques on Thematic Color Mappings in 3D Virtual Environments },
  booktitle = { Proceedings Vision, Modeling \& Visualization },
  year = { 2013 },
  pages = { 25--32 },
  doi = { 10.2312/PE.VMV.VMV13.025-032 }
}

Interactive Visualization of Generalized Virtual 3D City Models using Level-of-Abstraction Transitions

Amir Semmo, Matthias Trapp, Jan Eric Kyprianidis, and Jürgen Döllner
Computer Graphics Forum (Proceedings EuroVis) 2012

Abstract, BibTeX, DOI, Paper (PDF), Video (YouTube)

Virtual 3D city models play an important role in the communication of complex geospatial information in a growing number of applications, such as urban planning, navigation, tourist information, and disaster management. In general, homogeneous graphic styles are used for visualization. For instance, photorealism is suitable for detailed presentations, and non-photorealism or abstract stylization is used to facilitate guidance of a viewer's gaze to prioritized information. However, to adapt visualization to different contexts and contents and to support saliency-guided visualization based on user interaction or dynamically changing thematic information, a combination of different graphic styles is necessary. Design and implementation of such combined graphic styles pose a number of challenges, specifically from the perspective of real-time 3D visualization. In this paper, the authors present a concept and an implementation of a system that enables different presentation styles, their seamless integration within a single view, and parametrized transitions between them, which are defined according to tasks, camera view, and image resolution. The paper outlines potential usage scenarios and application fields together with a performance evaluation of the implementation.

@article{STKD12,
  author = { Semmo, Amir and Trapp, Matthias and Kyprianidis, Jan Eric and D{\"o}llner, J{\"u}rgen },
  title = { Interactive Visualization of Generalized Virtual 3D City Models using Level-of-Abstraction Transitions },
  journal = { Computer Graphics Forum },
  year = { 2012 },
  volume = { 31 },
  pages = { 885--894 },
  number = { 3 },
  note = { Proceedings EuroVis 2012 },
  doi = { 10.1111/j.1467-8659.2012.03081.x }
}

Concepts for Cartography-Oriented Visualization of Virtual 3D City Models

Amir Semmo, Dieter Hildebrandt, Matthias Trapp, and Jürgen Döllner
Photogrammetrie - Fernerkundung - Geoinformation (PFG) 2012

Abstract, BibTeX, DOI, Paper (PDF)

Virtual 3D city models serve as an effective medium with manifold applications in geoinformation systems and services. To date, most 3D city models are visualized using photorealistic graphics. But an effective communication of geoinformation significantly depends on how important information is designed and cognitively processed in the given application context. One possibility to visually emphasize important information is based on non-photorealistic rendering, which encompasses artistic depiction styles and is characterized by its expressiveness and communication aspects. However, a direct application of non-photorealistic rendering techniques primarily results in monotonous visualizations that lack cartographic design aspects. In this work, we present concepts for cartography-oriented visualization of virtual 3D city models. These are based on coupling non-photorealistic rendering techniques and semantics-based information for a user-, context-, and media-dependent representation of thematic information. This work highlights challenges for cartography-oriented visualization of 3D geovirtual environments, presents stylization techniques, and discusses their applications and ideas for a standardized visualization. In particular, the presented concepts enable a real-time and dynamic visualization of thematic geoinformation.

@article{SHTD2012,
  author = { Semmo, Amir and Hildebrandt, Dieter and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Concepts for Cartography-Oriented Visualization of Virtual 3D City Models },
  journal = { Photogrammetrie - Fernerkundung - Geoinformation (PFG) },
  year = { 2012 },
  pages = { 455--465 },
  number = { 4 },
  doi = { 10.1127/1432-8364/2012/0131 },
  issn = { 1432-8364 },
  publisher = { E. Schweizerbart'sche Verlagsbuchhandlung }
}

Colonia 3D - Communication of Virtual 3D Reconstructions in Public Spaces

Matthias Trapp, Amir Semmo, Rafael Pokorski, Claus-Daniel Herrmann, Jürgen Döllner, Michael Eichhorn, and Michael Heinzelmann
International Journal of Heritage in the Digital Era (IJHDE) 2012

Abstract, BibTeX, DOI, Paper (PDF)

The communication of cultural heritage in public spaces, such as museums or exhibitions, has gained more and more importance during the last years. The possibilities of interactive 3D applications open a new degree of freedom beyond the mere presentation of static visualizations, such as pre-produced video or image data. A user is now able to directly interact with 3D virtual environments that enable the depiction and exploration of digital cultural heritage artifacts in real-time. However, such technology requires concepts and strategies for guiding a user throughout these scenarios, since varying levels of experience with interactive media must be assumed. This paper presents a concept as well as an implementation for the communication of digital cultural heritage in public spaces, by the example of the project Roman Cologne. It describes the results achieved by an interdisciplinary team of archaeologists, designers, and computer graphics engineers with the aim to virtually reconstruct an interactive high-detail 3D city model of Roman Cologne.

@article{TSPHDEH12,
  author = { Trapp, Matthias and Semmo, Amir and Pokorski, Rafael and Herrmann,
	Claus-Daniel and D{\"o}llner, J{\"u}rgen and Eichhorn, Michael and Heinzelmann, Michael },
  title = { Colonia 3D - Communication of Virtual 3D Reconstructions in Public Spaces },
  journal = { International Journal of Heritage in the Digital Era (IJHDE) },
  year = { 2012 },
  volume = { 1 },
  pages = { 45--74 },
  number = { 1 },
  month = { 1 },
  doi = { 10.1260/2047-4970.1.1.45 },
  editor = { Marinos Ioannides },
  publisher = { Multi-Science Publishing }
}
2011

Colonia3D

Matthias Trapp, Amir Semmo, and Jürgen Döllner
Proceedings 9. Konferenz Kultur und Informatik - Multimediale Systeme 2011

Abstract, BibTeX, Paper (PDF)

This contribution presents the results of the interdisciplinary project Colonia3D - Visualization of Roman Cologne. The digital 3D reconstruction of ancient Cologne is the result of a joint research project of the Archaeological Institute of the University of Cologne, the Köln International School of Design (KISD) at the Cologne University of Applied Sciences, the Hasso Plattner Institute at the University of Potsdam, and the Römisch-Germanisches Museum (RGM) Cologne. The contribution presents the essential concepts of this interactive, museum-oriented 3D information system, describes different presentation modes, and details their technical realization. It discusses procedures and interaction concepts that support users while exploring and moving through the virtual 3D city model. Furthermore, techniques for the exchange, preparation, and optimization of complex 3D data sets are described, and potentials for digital museums and exhibitions are outlined. In particular, the presented approach constitutes an IT solution for simplified, spatially context-integrated, informal access to specialized archaeological knowledge.

@inproceedings{TSD11,
  author = { Trapp, Matthias and Semmo, Amir and D{\"o}llner, J{\"u}rgen },
  title = { Colonia3D },
  booktitle = { Tagungsband der 9. Konferenz Kultur und Informatik - Multimediale Systeme },
  year = { 2011 },
  pages = { 201--212 },
  month = { 5 },
  publisher = { Werner H{\"u}lsbusch Verlag }
}

Ansätze zur kartographischen Gestaltung von 3D-Stadtmodellen

Amir Semmo, Matthias Trapp, and Jürgen Döllner
Proceedings 31. Wissenschaftlich-Technische Jahrestagung der DGPF 2011

Abstract, BibTeX, Paper (PDF)

Interactive virtual 3D city models have evolved into a proven medium for the effective and efficient communication of geoinformation. They represent a specialized form of geovirtual environments and are characterized by an underlying 3D terrain model, the 3D buildings situated on it, and the complementary space of streets, green areas, and natural landscape. 3D city model systems allow users to navigate the model interactively and provide the basic functionality for the exploration, analysis, presentation, and editing of the spatially referenced information. Particularly in the field of map-like and map-related 3D depictions, automatic methods and techniques for the stylization and abstraction of the objects of a 3D city model represent a major problem for interactive 3D image synthesis. Here, the abstraction and illustration of potentially important information, and thus the reduction of the user's cognitive load, play a central role. In this respect, methods and techniques for non-photorealistic image synthesis are a proven tool of computer graphics, yet their direct application to a complete 3D city model frequently yields monotonous results that are severely limited in terms of design and cartography. An efficient and context-sensitive communication of 3D geoinformation, however, requires the coupling of object semantics and abstraction methods. This work presents a concept and its implementation that allows the selection and parameterization of non-photorealistic rendering techniques on the basis of object semantics (Figure 1). This makes it possible to assign different automatic abstraction techniques to objects and object groups. The presented approach runs in real-time and allows an interactive classification of objects and features at run-time, which enables, among other things, scenarios for the interactive exploration of thematically stylized features and feature-related data. This approach opens up possibilities for a targeted and systematic cartographic design of 3D city models as well as their real-time implementation by corresponding 3D visualization services.

@inproceedings{STD11,
  author = { Semmo, Amir and Trapp, Matthias and D{\"o}llner, J{\"u}rgen },
  title = { Ans{\"a}tze zur kartographischen Gestaltung von 3D-Stadtmodellen },
  booktitle = { 31. Wissenschaftlich-Technische Jahrestagung der DGPF },
  year = { 2011 },
  pages = { 473--482 }
}
2010

Anisotropic Kuwahara Filtering with Polynomial Weighting Functions

Jan Eric Kyprianidis, Amir Semmo, Henry Kang, and Jürgen Döllner
NPAR Poster Session / Proceedings EG UK Theory and Practice of Computer Graphics (TP.CG) 2010

Abstract, BibTeX, Paper (PDF)

In this work we present new weighting functions for the anisotropic Kuwahara filter. The anisotropic Kuwahara filter is an edge-preserving filter that is especially useful for creating stylized abstractions from images or videos. It is based on a generalization of the Kuwahara filter that is adapted to the local shape of features. For the smoothing process, the anisotropic Kuwahara filter uses weighting functions that use convolution in their definition. For an efficient implementation, these weighting functions are therefore usually sampled into a texture map. By contrast, our new weighting functions do not require convolution and can be efficiently computed directly during the filtering in real-time. We show that our approach creates output of similar quality to the original anisotropic Kuwahara filter and present an evaluation scheme to compute the new weighting functions efficiently by using rotational symmetries.

@inproceedings{KSKD10b,
  author = { Kyprianidis, Jan Eric and Semmo, Amir and Kang, Henry and D{\"o}llner, J{\"u}rgen },
  title = { Anisotropic Kuwahara Filtering with Polynomial Weighting Functions },
  booktitle = { Proc. EG UK Theory and Practice of Computer Graphics },
  year = { 2010 },
  pages = { 25--30 },
  month = { 9 }
}

@misc{KSKD10a,
  author = { Kyprianidis, Jan Eric and Semmo, Amir and Kang, Henry and D{\"o}llner, J{\"u}rgen },
  title = { Anisotropic Kuwahara Filtering with Polynomial Weighting Functions },
  booktitle = { NPAR Poster Session },
  month = { 6 },
  year = { 2010 }
}
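
The polynomial weighting idea can be reproduced in a few lines: the weight of sector 0 has the form max(0, x + ζ - η·y²)², rotated copies cover the remaining sectors, and the weights are multiplied by a Gaussian and normalized so that they partition it. The NumPy sketch below uses illustrative values for ζ, η, and the Gaussian width; consult the paper for the derivation and recommended parameters.

import numpy as np

N, R = 8, 16                       # number of sectors, kernel radius
zeta, eta = 0.33, 3.0              # illustrative parameters
y, x = np.mgrid[-R:R + 1, -R:R + 1] / R

weights = []
for i in range(N):
    phi = 2.0 * np.pi * i / N      # rotate coordinates into sector i
    xr = np.cos(phi) * x + np.sin(phi) * y
    yr = -np.sin(phi) * x + np.cos(phi) * y
    # Polynomial sector weight: no convolution, evaluable on the fly.
    k = np.maximum(0.0, xr + zeta - eta * yr * yr) ** 2
    weights.append(k)

# Multiply by a Gaussian and normalize so the sector weights sum to it.
gauss = np.exp(-(x * x + y * y) / (2 * 0.4 ** 2))
total = sum(weights) + 1e-8
weights = [k / total * gauss for k in weights]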

Automated Image-Based Abstraction of Aerial Images

Amir Semmo, Jan Eric Kyprianidis, and Jürgen Döllner
Proceedings of 13th AGILE International Conference on Geographic Information Science 2010

Abstract, BibTeX, DOI, Paper (PDF)

Aerial images represent a fundamental type of geodata with a broad range of applications in GIS and geovisualization. The perception and cognitive processing of aerial images by humans, however, is still faced with the specific limitations of photorealistic depictions, such as low-contrast areas, unsharp object borders, and visual noise. In this paper we present a novel technique to automatically abstract aerial images that enhances visual clarity and generalizes the contents of aerial images to improve their perception and recognition. The technique applies non-photorealistic image processing by smoothing local image regions with low contrast and emphasizing edges in image regions with high contrast. To handle the abstraction of large images, we introduce an image tiling procedure that is optimized for post-processing images on GPUs and avoids visible artifacts across junctions. This is technically achieved by filtering additional connection tiles that overlap the main tiles of the input image. The technique also allows the generation of different levels of abstraction for aerial images by computing a mipmap pyramid, where each of the mipmap levels is filtered with adapted abstraction parameters. These mipmaps can then be used to perform level-of-detail rendering of abstracted aerial images. Finally, the paper contributes a study of aerial image abstraction by analyzing the results of the abstraction process on distinctive visible elements in common aerial image types. In particular, we have identified a high abstraction potential in landscape images and a higher benefit from edge enhancement in urban environments.

@incollection{SKD10,
  author = { Semmo, Amir and Kyprianidis, Jan Eric and D{\"o}llner, J{\"u}rgen },
  title = { Automated Image-Based Abstraction of Aerial Images },
  booktitle = { Geospatial Thinking },
  publisher = { Springer },
  year = { 2010 },
  editor = { Painho, Marco and Santos, Maribel Yasmina and Pundt, Hardy },
  series = { Lecture Notes in Geoinformation and Cartography },
  pages = { 359--378 },
  month = { 5 },
  doi = { 10.1007/978-3-642-12326-9_19 }
}
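
The overlap-based tiling can be sketched directly: each tile is padded with a halo (the connection region), filtered independently, and cropped back before stitching, so no seams appear across tile borders. The NumPy sketch below uses a box blur as a stand-in for the abstraction filter; tile and halo sizes are illustrative (the halo must cover the filter radius).

import numpy as np

def box_blur(img, r=2):
    """Stand-in for the abstraction filter; radius r defines the halo need."""
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(img, (dy, dx), axis=(0, 1))
    return out / (2 * r + 1) ** 2

def filter_tiled(img, tile=32, halo=4, r=2):
    H, W = img.shape[:2]
    out = np.empty_like(img)
    # Pad once, then cut overlapping blocks (tile + halo on each side).
    pad = np.pad(img, ((halo, halo), (halo, halo), (0, 0)), mode="edge")
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            block = pad[y:y + tile + 2 * halo, x:x + tile + 2 * halo]
            # Filter the block, then crop the halo so only pixels with
            # full (seam-free) neighborhoods are written back.
            f = box_blur(block, r)[halo:-halo, halo:-halo]
            out[y:y + tile, x:x + tile] = f
    return out

img = np.random.rand(64, 64, 3).astype(np.float32)
tiled = filter_tiled(img)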

ContextLua: Dynamic Behavioral Variations in Computer Games

Benjamin Hosain Wasty, Amir Semmo, Malte Appeltauer, Bastian Steinert, and Robert Hirschfeld
Proceedings of 2nd International Workshop on Context-Oriented Programming 2010

Abstract, BibTeX, Paper (ACM DL)

Behavioral variations are central to modern computer games as they make the gameplay a more interesting user experience. However, these variations significantly add to the implementation complexity. We discuss the domain of computer games with respect to dynamic behavioral variations and argue that context-oriented programming is of special interest for this domain. This motivates our extension to the dynamic scripting language Lua, which is frequently used in the development of computer games. Our newly provided programming constructs allow game developers to use layers for defining and activating variations of the basic gameplay.

@inproceedings{WSASH10,
  author = { Hosain Wasty, Benjamin and Semmo, Amir and Appeltauer, Malte and Steinert, Bastian and Hirschfeld, Robert },
  title = { ContextLua: Dynamic Behavioral Variations in Computer Games },
  booktitle = { Proceedings of the 2nd International Workshop on Context-Oriented Programming },
  year = { 2010 },
  pages = { 5:1--5:6 },
  doi = { 10.1145/1930021.1930026 }
}
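
Since ContextLua's constructs extend Lua itself, the following Python sketch is only a language-neutral illustration of the underlying idea of context-oriented programming: a layer bundles behavioral variations, and activating it swaps in the variation for the dynamic extent of a block. All names are invented and do not reflect the ContextLua API.

from contextlib import contextmanager

class Layered:
    _active = []                               # stack of active layers

    def dispatch(self, name):
        for layer in reversed(self._active):   # innermost active layer wins
            impl = layer.get((type(self).__name__, name))
            if impl:
                return impl
        return getattr(type(self), "_base_" + name)

class Guard(Layered):
    def _base_greet(self):
        return "Halt! Who goes there?"
    def greet(self):
        return self.dispatch("greet")(self)

# A layer maps (class, method) pairs to behavioral variations.
night_layer = {("Guard", "greet"): lambda self: "Zzz..."}

@contextmanager
def with_layer(layer):
    Layered._active.append(layer)
    try:
        yield
    finally:
        Layered._active.pop()

guard = Guard()
print(guard.greet())                # "Halt! Who goes there?"
with with_layer(night_layer):
    print(guard.greet())            # "Zzz..." while the layer is active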

August 2020

Graphite: Interactive Photo-to-Drawing Stylization on Mobile Devices

SIGGRAPH (Appy Hour), Virtual Event

October 2018

MaeSTrO: A Mobile App for Style Transfer Orchestration using Neural Networks

Cyberworlds 2018, Singapore

August 2018

Reducing Affective Responses to Surgical Images through Color Manipulation and Stylization

Expressive 2018, Victoria, BC, Canada

August 2018

MaeSTrO: Mobile-Style Transfer Orchestration Using Adaptive Neural Networks

SIGGRAPH (Appy Hour), Vancouver, BC, Canada

November 2017

Pictory - Neural Style Transfer and Editing with CoreML

SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications (Demo Session), Bangkok, Thailand

August 2017

Pictory: Combining Neural Style Transfer and Image Filtering

SIGGRAPH (Appy Hour), Los Angeles, CA, USA

July 2017

Neural Style Transfer: A Paradigm Shift for Image-based Artistic Rendering?

Expressive 2017, Los Angeles, CA, USA

December 2016

Interactive Image Filtering with Multiple Levels-of-Control on Mobile Devices

SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications, Macau

December 2016

BeCasso: Artistic Image Processing and Editing on Mobile Devices

SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications (Demo Session), Macau

July 2016

Interactive Multi-scale Oil Paint Filtering on Mobile Devices

SIGGRAPH (Poster Session), Anaheim, CA, USA

July 2016

BeCasso: Image Stylization by Interactive Oil Paint Filtering on Mobile Devices

SIGGRAPH (Appy Hour), Anaheim, CA, USA

May 2016

Interactive Oil Paint Filtering On Mobile Devices

Expressive 2016 (Poster Session), Lisbon, Portugal

November 2015

Konzepte und Techniken für das kartographische Design von 3D geovirtuellen Umgebungen

SGK-Herbsttagung, Muttenz, Switzerland

August 2015

Cartography-Oriented Design of 3D Geospatial Information Visualization - Overview and Techniques

International Cartographic Conference 2015, Rio de Janeiro, Brazil

June 2015

Image Stylization by Oil Paint Filtering using Color Palettes

Expressive 2015, Istanbul, Turkey

November 2014

An Interaction Framework for Level-of-Abstraction Visualization of 3D Geovirtual Environments

ACM SIGSPATIAL MapInteract 2014, Dallas, TX, USA

August 2014

Image Filtering for Interactive Level-of-Abstraction Visualization of 3D Scenes

Expressive 2014, Vancouver, Canada

September 2013

Evaluating the Perceptual Impact of Rendering Techniques on Thematic Color Mappings in 3D Virtual Environments

International Workshop on Vision, Modeling and Visualization (VMV) 2013, Lugano, Switzerland

July 2013

Real-Time Rendering of Water Surfaces with Cartography-Oriented Design

Expressive 2013, Anaheim, CA, USA

June 2012

Interactive Visualization of Generalized Virtual 3D City Models using Level-of-Abstraction Transitions

Eurographics Conference on Visualization (EuroVis) 2012, Vienna, Austria

June 2012

Cartography-Oriented Visualization of Virtual 3D City Models based on Level-of-Abstraction Transitions

Hasso Plattner Institute / Research School, Potsdam, Germany

May 2012

Colonia3D

Kultur und Informatik Conference, Berlin, Germany

April 2011

Ansätze zur kartographischen Gestaltung von 3D-Stadtmodellen

31. Wissenschaftlich-Technische Jahrestagung der DGPF, Mainz, Germany

May 2010

Automated Image-Based Abstraction of Aerial Images

AGILE International Conference on Geographic Information Science, Guimarães, Portugal

2015/16 : Winter Term / Summer Term
  • Image and Video Processing with OpenGL ES (Seminar)

    seminar (BA), tutor

2014/15 : Winter Term / Summer Term
  • Geovisualization

    seminar (MA), tutor

  • Introduction to Visualization

    lecture (MA), tutor

2013/14 : Winter Term / Summer Term
  • Geovisualization

    seminar (MA), tutor

  • Image & Video Processing

    seminar (BA), tutor

  • Video Analysis, Abstraction, and Summarization

    project (BA), tutor

2012/13 : Winter Term / Summer Term
  • Computergraphics II

    lecture (BA), tutor & co-lecturer

  • Graphics Programming with OpenGL and C++

    lecture/seminar (BA), tutor & lecturer

2011/12 : Winter Term / Summer Term
  • Real-Time Rendering Techniques

    seminar (MA), tutor

  • Computergraphics I & Computergraphics II

    lecture (BA), tutor

Committees and Chairing
Eurographics 2022 (Short Papers Program Committee), Expressive 2018/2019 (Publicity Chair), Expressive 2017 (Program Committee), GeoVIS 2015 - ISPRS Geospatial Week (Program Committee)

Reviewing
SIGGRAPH (2014, 2021), IEEE Transactions on Visualization and Computer Graphics (2015, 2017, 2018, 2019), IEEE Transactions on Image Processing (2018), Computers & Graphics (2016, 2018, 2019, 2020, 2022), Eurographics (2017, 2022), IEEE VIS / SciVis (2017), Eurographics Conference on Visualization / EuroVis (2014), IEEE Pacific Visualization (2013), IEEE MultiMedia (2016), International Journal of Geographical Information Science (2013), The Visual Computer (2012, 2017), Expressive (2017, 2019), Graphics Interface (2017), Vision, Modeling and Visualization (2016)

Memberships
ACM, ACM SIGGRAPH, IEEE

Awards & Honors
Best Paper @ International Conference on Cyberworlds 2018 and 2021, Best Paper @ SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications 2017, Best Demo @ SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications 2016 and 2017, CGF Cover Contest 2016 (Runner-up), Best Paper Award @ Expressive 2015 and 2018, Cover Image Selection for Proceedings of EG UK TP.CG - 2013, Cover Image Selection for International Journal of Heritage in the Digital Era (vol. 1, no. 1) - 2012, Hasso Plattner Institute Best Diploma (M.Sc.) - 2011, Best Paper Award @ EuroMed Conference - 2010, Best technical research student paper @ EG UK TP.CG - 2010