[
    {
        "id": "kovacs-2026-sbg",
        "type_id": "journalpaper_notalk",
        "tu_id": null,
        "repositum_id": "20.500.12708/226679",
        "title": "Style Brush: Guided Style Transfer for 3D Objects",
        "date": "2026-02-16",
        "abstract": "We introduce Style Brush, a novel style transfer method for textured meshes designed to empower artists with fine-grained control over the stylization process. Our approach extends traditional 3D style transfer methods by introducing a novel loss function that captures style directionality, supports multiple style images or portions thereof, and enables smooth transitions between styles in the synthesized texture. The use of easily generated guiding textures streamlines user interaction, making our approach accessible to a broad audience. Extensive evaluations with various meshes, style images and contour shapes demonstrate the flexibility of our method and showcase the visual appeal of the generated textures. Finally, the results of a user study indicate that our approach generates visually appealing mesh textures that adhere to user-defined guidance and enable users to retain creative control during stylization. Our implementation is available at: https://github.com/AronKovacs/style-brush.",
        "authors_et_al": false,
        "substitute": null,
        "main_image": null,
        "sync_repositum_override": null,
        "repositum_presentation_id": null,
        "authors": [
            1950,
            5415,
            1410
        ],
        "articleno": "e70308",
        "doi": "10.1111/cgf.70308",
        "issn": "1467-8659",
        "journal": "Computer Graphics Forum",
        "pages": "18",
        "publisher": "WILEY",
        "research_areas": [],
        "keywords": [
            "3D style transfer",
            "directional guidance",
            "mesh texture synthesis",
            "user guidance"
        ],
        "weblinks": [],
        "files": [],
        "projects_workgroups": [],
        "url": "https://www.cg.tuwien.ac.at/research/publications/2026/kovacs-2026-sbg/",
        "__class": "Publication"
    },
    {
        "id": "Kovacs_PhD",
        "type_id": "phdthesis",
        "tu_id": null,
        "repositum_id": "20.500.12708/224033",
        "title": "3D Style Transfer: Lifting 2D Methods to 3D and Enabling Interactive Guidance",
        "date": "2026",
        "abstract": "3D style transfer refers to altering the visual appearance of 3D objects and scenes to match a given (artistic) style, usually provided as an image. 3D style transfer presents significant potential in streamlining the creation of 3D assets such as game environment props, VFX elements, or large-scale virtual scenes. However, it faces challenges such as ensuring multi-view consistency, respecting computational and memory constraints, and enabling artist control. In this dissertation, we propose three methods that aim to stylize 3D assets while addressing these challenges. We focus on optimization-based methods due to the higher quality of results compared to single-pass methods. Our contributions advance the state of the art by introducing: (i) novel surface-aware CNN operators for direct mesh texturing, (ii) the first Gaussian Splatting (GS) method capable of transferring both high-frequency details and large-scale patterns, and (iii) an interactive method that allows directional and region-based control over the stylization process. Each of these methods outperforms existing baselines in visual fidelity and robustness. Across three complementary projects, we explore different facets of 3D style transfer. In the first project, we propose a method that creates textures directly on the surface of a mesh. By replacing the standard 2D convolution and pooling layers in a pre-trained 2D CNN with surface-based operations, we achieve seamless, multi-view-consistent texture synthesis without relying on proxy 2D images. In the second project, we transfer both high-frequency and large-scale patterns using GS, while addressing representation-specific artifacts such as oversized or elongated Gaussians. Furthermore, we design a style loss capable of transferring style patterns at multiple scales, resulting in visually appealing stylized scenes that preserve both intricate details and large-scale motifs. In the third project, we propose an interactive method that allows users to guide stylization by drawing lines to control pattern direction, and painting regions on both the 3D surface and style image to specify where and how specific style patterns should be applied. Through our extensive qualitative and quantitative evaluations, we show that our methods surpass state-of-the-art techniques. We also demonstrate their robustness across diverse 3D objects, scenes, and styles, highlighting the flexibility of the presented methods. Future work may explore extensions such as geometry modification for style-driven shape changes, more efficient large-scale pattern synthesis, temporal coherence in dynamic or video-based scenes, and refined interactive controls informed by direct artist feedback to better integrate creative intent into the stylization pipeline.",
        "authors_et_al": false,
        "substitute": null,
        "main_image": null,
        "sync_repositum_override": "title,abstract,date,keywords,type_id",
        "repositum_presentation_id": null,
        "authors": [
            1950
        ],
        "ac_number": "AC17745734",
        "co_supervisor": [
            5572
        ],
        "date_end": "2024",
        "date_start": "2021",
        "doi": "10.34726/hss.2026.137815",
        "open_access": "yes",
        "pages": "104",
        "supervisor": [
            1410
        ],
        "research_areas": [
            "MedVis"
        ],
        "keywords": [
            "Style transfer",
            "Texture synthesis",
            "Neural Networks",
            "Neural rendering"
        ],
        "weblinks": [],
        "files": [
            {
                "description": null,
                "filetitle": "thesis",
                "main_file": true,
                "use_in_gallery": false,
                "access": "public",
                "name": "Kovacs_PhD-thesis.pdf",
                "type": "application/pdf",
                "size": 4807053,
                "path": "Publication:Kovacs_PhD",
                "url": "https://www.cg.tuwien.ac.at/research/publications/2026/Kovacs_PhD/Kovacs_PhD-thesis.pdf",
                "thumb_image_sizes": [
                    16,
                    64,
                    100,
                    175,
                    300,
                    600
                ],
                "thumb_url": "https://www.cg.tuwien.ac.at/research/publications/2026/Kovacs_PhD/Kovacs_PhD-thesis:thumb{{size}}.png"
            }
        ],
        "projects_workgroups": [
            "vis"
        ],
        "url": "https://www.cg.tuwien.ac.at/research/publications/2026/Kovacs_PhD/",
        "__class": "Publication"
    },
    {
        "id": "kovacs-2024-smt",
        "type_id": "journalpaper_notalk",
        "tu_id": null,
        "repositum_id": "20.500.12708/200040",
        "title": "Surface-aware Mesh Texture Synthesis with Pre-trained 2D CNNs",
        "date": "2024-05",
        "abstract": "Mesh texture synthesis is a key component in the automatic generation of 3D content. Existing learning-based methods have drawbacks—either by disregarding the shape manifold during texture generation or by requiring a large number of different views to mitigate occlusion-related inconsistencies. In this paper, we present a novel surface-aware approach for mesh texture synthesis that overcomes these drawbacks by leveraging the pre-trained weights of 2D Convolutional Neural Networks (CNNs) with the same architecture, but with convolutions designed for 3D meshes. Our proposed network keeps track of the oriented patches surrounding each texel, enabling seamless texture synthesis and retaining local similarity to classical 2D convolutions with square kernels. Our approach allows us to synthesize textures that account for the geometric content of mesh surfaces, eliminating discontinuities and achieving comparable quality to 2D image synthesis algorithms. We compare our approach with state-of-the-art methods where, through qualitative and quantitative evaluations, we demonstrate that our approach is more effective for a variety of meshes and styles, while also producing visually appealing and consistent textures on meshes.",
        "authors_et_al": false,
        "substitute": null,
        "main_image": null,
        "sync_repositum_override": null,
        "repositum_presentation_id": null,
        "authors": [
            1950,
            1919,
            1410
        ],
        "articleno": "e15016",
        "doi": "10.1111/cgf.15016",
        "issn": "1467-8659",
        "journal": "Computer Graphics Forum",
        "number": "2",
        "pages": "13",
        "publisher": "WILEY",
        "volume": "43",
        "research_areas": [],
        "keywords": [
            "Deep learning (DL)",
            "Computer Graphics",
            "Texture Synthesis"
        ],
        "weblinks": [],
        "files": [],
        "projects_workgroups": [],
        "url": "https://www.cg.tuwien.ac.at/research/publications/2024/kovacs-2024-smt/",
        "__class": "Publication"
    },
    {
        "id": "kovacs-2024-gsg",
        "type_id": "journalpaper_notalk",
        "tu_id": null,
        "repositum_id": "20.500.12708/205211",
        "title": "G-Style: Stylized Gaussian Splatting",
        "date": "2024",
        "abstract": "We introduce G-Style, a novel algorithm designed to transfer the style of an image onto a 3D scene represented using Gaussian Splatting. Gaussian Splatting is a powerful 3D representation for novel view synthesis, as—compared to other approaches based on Neural Radiance Fields—it provides fast scene renderings and user control over the scene. Recent pre-prints have demonstrated that the style of Gaussian Splatting scenes can be modified using an image exemplar. However, since the scene geometry remains fixed during the stylization process, current solutions fall short of producing satisfactory results. Our algorithm aims to address these limitations by following a three-step process: In a pre-processing step, we remove undesirable Gaussians with large projection areas or highly elongated shapes. Subsequently, we combine several losses carefully designed to preserve different scales of the style in the image, while maintaining as much as possible the integrity of the original scene content. During the stylization process and following the original design of Gaussian Splatting, we split Gaussians where additional detail is necessary within our scene by tracking the gradient of the stylized color. Our experiments demonstrate that G-Style generates high-quality stylizations within just a few minutes, outperforming existing methods both qualitatively and quantitatively.",
        "authors_et_al": false,
        "substitute": null,
        "main_image": null,
        "sync_repositum_override": null,
        "repositum_presentation_id": null,
        "authors": [
            1950,
            5415,
            1410
        ],
        "articleno": "e15259",
        "doi": "10.1111/cgf.15259",
        "issn": "1467-8659",
        "journal": "Computer Graphics Forum",
        "number": "7",
        "pages": "13",
        "publisher": "WILEY",
        "volume": "43",
        "research_areas": [],
        "keywords": [
            "Artificial intelligence",
            "Computer graphics",
            "Neural networks"
        ],
        "weblinks": [],
        "files": [],
        "projects_workgroups": [],
        "url": "https://www.cg.tuwien.ac.at/research/publications/2024/kovacs-2024-gsg/",
        "__class": "Publication"
    }
]
